During the time of the Japanese occupation, life for the Swedish missionaries carried on fairly normally. There was mission work to be done, sermons to prepare, and meetings to attend. Dollan continued with her piano lessons. In the darkness of the early morning at 5:00, Dollan would leave Hua-Yuan (the name of the family's house) in Kiaohsien along with the cook or Farfar's horseman. They walked forty minutes to the city gate, and another twenty minutes beyond it to the train station to catch the 6:30 train to Tsingtao. Along the way, Japanese soldiers with bayonets fixed to the ends of their rifles would jump out of the bushes close to the Lutheran mission [was this the American Lutheran mission which the Reinbrechts were from?] shouting, "KU-LIUNG!" "WHAT'S YOUR NAME!" The moon would be high, or if there was no moon, a kerosene lantern reflected off the bayonets pointed toward them. Dollan jumped with fright every time. The cook or the horseman would answer quickly, but as calmly as possible, that he was escorting a child of the mission to catch the train for her piano lessons, and they would be allowed to pass. They would come to the heavy city gates, which the guards would open for her. Dollan would pass through, and the gate would be closed behind her. In the darkness she would not know what waited for her between the city gate and the train station. She would arrive in Tsingtao two and a half hours later [after starting from her house or starting from the train station?]. During the war passenger trains were not available, so she rode freight trains to her music lessons. Dollan took six years of piano lessons during the eight years of Japanese occupation. Mrs. Rieder was always a nervous wreck, with nerve tremors in her face all the time. Perhaps she, herself, was responsible for some of them, Dollan thought. Mrs. Rieder made Dollan, and all her other students, learn Bach, Schumann and other composers by heart.
Dollan would sometimes lose her place in the music and ask, "Where am I?" Mrs. Rieder, with a look of sarcasm on her nervous face and in her voice, would reply, "You are in Tsingtao, China!" Not a very pleasant person, Mrs. Rieder could fairly be described as 'bitchy.' Piano lessons were difficult, Mrs. Rieder uncompromising. In the winter Dollan's fingers would be stiff with cold as she tried to hammer out her music lessons to the satisfaction of Mrs. Rieder. Even in the cold of winter Mrs. Rieder never offered Dollan something warm to drink. Later Dollan had a fellow classmate, Arne from Kiaohsien, who went with her to piano lessons. His parents were from another Swedish mission, but lived near Peking, where there was no Swedish school. Fortunately, Dollan was not required to take music lessons during the summer. Dollan thought that perhaps Mr. Rieder taught something, perhaps German, in one of the local schools, but actually he was with the Neutral Nations Supervisory Commission for Switzerland. Short, chubby, bald, and with zero personality, young Dollan thought; she did not see him often at all. Either he was in another room or at work. He spoke poor Chinese, she remembered. Swedish school continued on for the missionary children in Kiaohsien. Dollan took chemistry, but because the Swedes did not have a chemistry lab, she took it at the high school from a Chinese teacher who taught in Chinese. The lab really did look like a chemistry lab, with test tubes, beakers, other glass containers and burners. The Japanese did not mistreat the Swedes. They were neutrals, not enemy nationals, and the Japanese respected that. Other nationalities, such as the Americans, were subject to suppression and imprisonment. The Swedish neutrals were required to wear white arm bands with the Chinese characters "Swehdian Gwe" (Swedish Country) printed upon them.
All the other nationals also had white arm bands with their respective country's name, which in this area were usually Swiss and Russian [how about German?]. Women were required to wear slacks rather than skirts, for the odd reason that during air raids they would be able to run faster. To travel from city to city, such as from Kiaohsien to Tsingtao, a person had to carry a special pass issued for the purpose, and also inoculation slips to prove that the required shots against disease had been received. Neither Oscar nor Dollan remembers the Japanese visiting the house. The Japanese were always nice, and always more interested in talking to the children than to anyone else. Neither did the Rinells have any dealings with the Japanese in town, and they didn't have to report to them at any time, except on one occasion. A Japanese officer came to the house; Oscar talked to him and "led him to the Lord" while Hellen and Dollan were praying in another room. The Japanese officer becoming a Christian was an unusual event. Sometimes the Japanese helped the Chinese troops against the bandits in the surrounding areas, but this was not often. Oscar mentions that later, when the Japanese were to turn over Tsingtao to the Chinese, the Japanese were hoping that the Chinese bandits surrounding Tsingtao would spoil the whole agreement. The Chinese talked it over with these bandits, gave them some bribe money, and the Japanese plan didn't work out. If one was sick at all, or looked sick, they were not allowed on the trains for fear of spreading disease to others. The Japanese also required everyone to have taken particular inoculations against disease, and to carry papers proving they had received them. At train stations Japanese were on duty to inoculate anyone who had not had their shot. The same needle and the same syringe were used for everyone. One day little Johnny had forgotten his papers. In desperation he rolled up the sleeve of his shirt and slapped a band-aid on his arm.
At the train station he was asked for his papers to prove he had received the latest required shot. He rolled up his sleeve, showing the official where he had presumably been given his shot. He was allowed to continue on. During the war years inflation ran out of control. It took a whole suitcase of money to buy even everyday food items. The Rinells bought good wheat on the market and stored it in boxes as a hedge against the inflation. When the wheat was finished they bought other items immediately, before the prices went up. Sometimes they bought gold units, which were very easy to hide, or Yan She-kai silver dollars. Later, when the Communists defeated Chiang, this was not even allowed. In the last few years of the war, little money was able to reach the Rinells from Sweden. Chinese workers for the mission would give the Rinells millet and rice until they were able to get money of their own. Along with the rice and millet they ate 'di gua,' or sweet potatoes. When they got up in the morning they had di gua for breakfast. When they came in for lunch, di gua. For supper, di gua. The di gua was prepared in different ways for variety. "Di gua ger" was di gua slit down the middle and baked in the sun. It was also prepared as mush for breakfast, boiled for lunch, and baked for dinner. For the latter the sweet potato would be pricked a few times with a fork and baked for about an hour, or until the peel loosened from the sweet potato. The latter was the best way to prepare it. Things could have been worse. The di gua was deliciously sweet and filling. Dollan's classmate who lived with the Rinells for a time came in one day to Dollan saying, "Guess what we got for breakfast!? Di gua!!!" On one occasion the Rinells had a break from their monotonous diet when Dr. Neastrum, a professor at Shense Tiwan-foo, brought money back with him from Sweden on one of his yearly trips back home via Siberia. In the spring [Dusty Reinbrecht believes they were taken (or sent?) to Weihsien camp in the spring of 1943. Email from Dusty Reinbrecht to LJH, February 21, 2008.] the Reinbrecht family were staying in a hotel at Iltis Huk where other nationals of the Allies - Brits, Americans, Dutch, French and others - had been taken [or required to go?]. One day Japanese army trucks pulled up. Soldiers demanded they all pack their suitcases. Everyone was to be taken away. Georgie does not remember being scared at the time, but her parents were. [It was later that Georgie's parents related how scared they were at the time. Email from Georgie Reinbrecht to LJH, February 27, 2008.] The soldiers allowed them to take only what they could carry, which amounted to two suitcases each. In the front yard, the soldiers emptied the contents of each bag and removed some things. Georgie saw a soldier removing a flashlight. The Reinbrechts and the others repacked their bags, and the soldiers ordered them all into the trucks. The soldiers drove the trucks to the train station [where is the train station?]. They were all ordered onto the train bound for Weihsien and interned at the Presbyterian compound, which had been transformed into an internment camp. A member of the Swiss consulate made at least a few visits, traveling to Weihsien from Tsingtao. According to Georgie Reinbrecht there may have been times when the consulate added items such as medicines to the list of items they were allowed to bring into camp, possibly adding items to blank spaces on the list. [Email from Georgie Reinbrecht to LJH, March 11, 2008.] As far as Georgie remembers, the consulate official came once or twice, though there may have been more times that she does not remember. "The [consulate official] came for a visit maybe once or twice but that also was difficult because if he came [he] might not get back [to the consulate], because often train lines were bombed. But he certainly did what he could for us."
[Email from Georgie Reinbrecht to LJH, March 13, 2008. For third-party photographs of the camp see: http://personal.nbnet.nb.ca/sancton/index.html.] On Tuesday, June 1, Hedvig visited Hellen at the hospital in Tsingtao. She was doing pretty well, but any surgery would not take place for a while. Oscar had also been checked. His blood pressure was 108 (Mom said this reading is low and wondered if this was the 'top' reading). He needed rest; the doctor suggested two months. Probably Hellen and Oscar would soon move to Iltis Huk. Hedvig went out to visit John's grave in the cemetery behind the hospital on the hill. The grave was well taken care of. It was peaceful. "Out there there is no war." It would be nearly two years since he left her. "Cry not my heart. We will soon meet!" she writes. Egron and Oscar stopped by for coffee. Egron also looked tired. Everyone looked worn out that spring. Wednesday was another rainy day, and so dark. Getting to school was not easy. Little Margareta was six months old. The little one looked very happy. Gerda left Margareta in the care of those at home and went into Tsingtao to help Oscar open the cottage at Iltis Huk and get it ready for summer. (Someone would always go to the cottage early to prepare it for the rest of the family.) Because of his health Oscar would need to take his vacation early. Egron could hopefully follow soon. On Saturday Hedvig puttered around the house all day and dusted. Krankenschwester Friedel, Mrs. Matzat, and Streker, an American missionary, were coming to dinner. They celebrated Margareta's birthday. It was good to have Friedel come visit. She had been so good when Hedvig and John Alfred were ill two years before. In the afternoon they had coffee in Nanguan (south yard). Ester Wahlin also joined them. The poor thing had a bad cold. On Sunday, June 6, Hedvig was in church a good deal of the day. Pastor Han's sermon this time was on Jesus' statement "I am the light of the World."
At 5:00 PM she attended a women's meeting. The reports given on various mission endeavors were interesting, and there were many people to pray for. Still, with so many mission enterprises and so much prayer, the Chinese did not come in any great numbers to the Lord, she writes. Friday was also graduation. Seven young men and two girls graduated from the mission school, and three men and two girls graduated from the Bible School. Mr. Schultz gave a long speech on always being cheerful, praying without thinking of receiving anything, and thanking God. Mr. H. Lindberg also spoke. Egron led the meeting. Dinner followed at Egron and Gerda's. Mattis (Mathilda Pearsson), who had been visiting, traveled home with her girls. Schultz also went home. Word had also come that Dollan must rest because her heart was not good. Now father, mother and daughter were all on sick leave. The stress of the war combined with little or poor food caused many to become sick. Common at this time was strep throat, which led to heart problems. Hedvig appreciated Dr. and Mrs. Eitel. They were such 'heartful' and good people. While Hedvig was in Tsingtao, Dr. Eitel took Hedvig's blood pressure. It was 184 over 71. Her heart was good and normal, especially considering her age. Three years earlier her blood pressure had been 175, and in July she weighed in at 59 kilos. Hellen had another attack of hives. In another week she was also to have surgery. Pastor Han spoke again at the service on Sunday, June 21. It was a pretty good sermon on Ephesians ['Ef'] 2:1-10. Little Margareta kept Hedvig company. Gerda and Egron were in Gaomi congratulating Martin on his fortieth birthday. Egron and Gerda were thinking of traveling to Tsingtao again. Bible class had been postponed another week. The Hwanghsien brothers had promised to come by July 3. [Maybe they were to speak at the Bible class?] Two days later it was Midsummer Eve, and in Sweden it was also All Youth's Day. "Here there is really nothing that is happening."
Hedvig was missing the midsummer festivities of Sweden. China had none of these. She was thinking again of traveling to Tsingtao. She was also worried about Hellen. The next day, Egron came in with letters from Sweden for everyone, though none came for Hedvig. Pastor Edin, who was 69 years old, and Pastor K. W. Beckman, who was only 51 years of age, had passed away. That meant Maria Edin was now alone too. Hedvig could understand the loneliness she must be experiencing. Hellen still had not been scheduled for an operation. Her hives condition was still not letting up. A letter finally arrived. This one was from Stensnäs (a place in Sweden). Augusta was still well at 86 years old. She was bedridden, but sat up six to seven hours per day. The relatives were well and everything was peaceful and quiet in Sweden. With the war going on in Europe, Sweden had frozen all prices, though things were a bit more expensive now than before. On June 28 Hedvig and Egron arrived in Iltis Huk. The next day they visited Hellen. The operation had finally been done, and she was doing well, though she was of course experiencing pain, and she had a fever. Iltis Huk was cool and nice, unlike the hot interior. [Some days later perhaps - check diary] Oscar, Dollan and Hedvig went to the graveyard with some flowers to put on John Alfred's grave. At 9:40, the time of John Alfred's death, Oscar prayed and thanked God for supporting them all these last two years, "but we still miss our husband and father." On July 3 they received a letter from their daughter Margaret and her husband Roy, dated October 12 of the previous year. Roy was still employed by some company and both were in good health. "How wonderful to hear from them. God is good and hears prayer," Hedvig writes. [If possible check to see if this letter from Margaret and Roy arrived on the anniversary of John Alfred's death or on the day of his death in 1941. Probably the former. LJH]
In the evening of July 7 Dr. Eitel drove Hellen home from the hospital in Tsingtao to Iltis Huk [probably] in his own car. She was looking much better. Everyone was very thankful that the surgery was finally over with. [Mom, could you look over the line on July 7 regarding 'thin gravel'? I don't understand that full sentence.] Three days later it was Dollan's birthday. Gerda and Johnny came over to Iltis Huk to congratulate her. The Eitels were over for dinner. Because of the difficult times, the Rinell family often did not have other people over; food for entertaining was scarce. But Mr. and Mrs. Eitel were easy company to have over. Dollan does not remember much about her birthday except that they all talked a lot. This year would be a turning point for young Dollan. She had graduated from high school. A new chapter in her life was starting. Hedvig got back home to Kiaohsien in time for the morning service. Pastor Tsang preached a good sermon on Revelation 1. In this passage God was coming to get his own people where they sat forgotten and tired on the island of Patmos. [Actually this chapter does not talk about the people sitting on Patmos - though John was - and being forgotten and tired. Though the next chapter talks about the trials of some of God's people, though not on Patmos.] The sermon was uplifting. On July 14 Brothers Tsang and Teng "were our own dear guests for dinner." They sang songs and prayed for the Rinells. On July 20 three large bundles of Swedish newspapers arrived from Mrs. Blomdahl, a missionary from another mission whose husband had been shot and killed by communists the previous year. The Rinells read them carefully and with joy. The war was going full strength. This was in stark contrast to Shantung, where everything was peaceful and quiet. Only one Vecko Posten arrived with the bundles. That was a shame. This issue told of the passing of four pastors back home: Bäckman, He___, Fritiof Pettersson and Eric Ohlson.
On Sunday, July 25, a good letter came from Vilhelm [Mom, who is Vilhelm?]. Everything was well at home. One surprising event, though: Mussolini had resigned. What did that mean for the war, Hedvig wondered. Oscar was down with a bad cold, sore throat, high fever and pain in the joints. Dr. Eitel again, untiringly, came by to see what he could do to help. The Germans in Tsingtao were worried about the war. They now believed that Germany had lost. The relationship between the German missionaries and the other missionaries was good. All the German missionaries were friendly, though not really cozy, according to Dollan. Some missionaries were Nazis. July 28: Hedvig and other Rinells attended a 'good tea' at the Eitels. July 29: No apparent change on the war front regarding Italy. Mussolini's resignation was apparently only an internal issue. Oscar was getting better. Hellen was very tired. July 31: A business meeting was held. August 1: It was Nina Fredrickson's birthday. Nina was a 'spinster' missionary. [Mom, what is the funny story in language concerning Nina?] Oscar continued to improve but was becoming so thin. Monday, August 2, was Hellen's birthday. The day was marked by good coffee, happy guests (no doubt partly due to the good coffee) and flowers. Though a good day, Hellen was still feeling tired and worn out from all her illnesses. And she had to go in again for more treatments. On Wednesday evening three letters arrived from Sweden. Two were about Aunt Augusta. A new stroke had ended her busy life on May 2 at the age of 87. She had now reached her goal. She had been a warm and good Christian who had left many good memories. Her brother and sister had come home [after hearing of her death?]. August 6: They had a good time at a tea party at the Scholz home. August 7: Landin wanted to have a Swedish school in Peking. Hedvig doubted that people would want to move there.
August 9: It had become apparent that no one was interested in moving the Swedish school to Peking, so nothing would come of the idea. Hedvig promised Arne and Robert that they could live with her if they decided to return to school in Kiaohsien. Thursday, August 12: A Japanese soldier swam too far from the shore and was eaten by a shark. They searched for him with boats and planes but could not locate his body. At another time three were eaten by sharks off the German beach. The Chinese Christians said that the Lord was punishing the Japanese because they had made the church into a military installation. To Hedvig, though, the event was simply horrible. As it was, the Japanese navy scouts left after just a few days rather than a month. Perhaps they had more to fear than just sharks. The Japanese soldiers had been acting a bit jittery of late. Saturday, August 14, brought more wind, fog and heavy weather, hard on both mood and nerves. Hedvig chided herself for the way she was feeling. After all, what was a little bad weather when everyone was reasonably well? "Yes Lord, you are good. Why should one let small things bother us when one should thank and praise Him." "But we are that way," she admits. Sunday everyone celebrated the birthday of Anna Andersson, who was thirty-five years old. Also, two issues of the Vecko Posten arrived, allowing everyone to catch up on some news. Tuesday was a day of repairs around the Iltis Huk home. Lutheran missionary Schön would be visiting in the winter. The Leanders would probably be their guests in September. The day before, a one-and-a-half-year-old boy had died of dysentery. He was the only child of a man who was presently in Burma. Wednesday, August 18: Gerda and Hellen traveled to Kiaohsien [to do something - can't make out what they were traveling back for - could you check this Mom?]. It was a bad night, and nerves were a bit frayed again the next day. Again, Hedvig was fighting feeling down and depressed.
"Why should one let little things irritate us when the whole world is suffering and bleeding." On Friday they had the 'sweet' company of C. Silverbrand23 from Tientsien. With the Silverbrands, the Eitels and themselves they arranged a small picnic. On Saturday together with the Silverbrands they had coffee with the Andréens [Mom, did they live in the area?]. Their topic of discussion during the visit was a religious one: who will roll away the stone from Jesus tomb? On a personal level they saw that we all have 'stones' blocking their view. But Jesus is willing to remove them. On Sunday the Rinells went to visit John Alfred's grave. Egron and Oscar took photos of the tomb stone with people gathered around it. Gerda and Hellen sang a song and Oscar prayed. Beautiful lilies were placed on the grave. It was memorable grave-side service for Hedvig. The cemetery attendant promised to save Hedvig a spot next to her husband. She would never have a chance to use it. On Monday Lindberg turned seventy-eight. He had spent a good number of years working in China. Anna Andersson and those from Kaomi left for their respective homes. On Friday August 27 Pastor Han came to Iltis Huk and reported that the Electric company blew up in Kiaohsien. The maintenance man had gotten drunk and forgot about the 'heating place'. Two people died and a few were injured in the explosion. One person escaped without injuries. The explosion destroyed the building at the plant and several houses nearby were damaged. A pastor's conference had occurred [under James teaching(?) Note: diary unclear]. Four 'J' pastors were present. Hedvig mentions that they [perhaps the government] were attempting to get the 'Emperor's Cult' started amongst the Christians. This did not look very promising for the church. Saturday, August 28. Everyone started to pack. 'The quiet wonderful rest is here.' [Perhaps they were going on holidays]. The strong heat of past weeks had let up and stark north winds were blowing. 
The skies were cloudy. The change in weather seemed to mirror changes in the world political climate. Word arrived that two Swedish ships had been sunk by German bombers. Sweden had protested to Berlin. Turkey was mobilizing, for what reason Hedvig didn't know. Even in far-off China, the affairs of Europe were of concern. Monday, August 30: Egron's family arrived in the morning. The Ginsburgers [Mom, French neighbors?] came over to Hedvig's place for coffee, bringing with them Mrs. Verber. Li-po-Ren and Miss Yang also joined everyone. While they enjoyed relative peace, there was much to be concerned about in Europe. The news earlier in the day said that the king of Denmark had been arrested and Hitler's government was taking over. King Christian himself wanted to sentence the soldiers who didn't withstand(?) Hitler. [Confirm this previous sentence with Mom. Unclear.] Hedvig wondered when Hitler would be sentenced for his crimes. The news also mentioned that King Boris of Bulgaria had died suddenly the day before yesterday. Wednesday, September 1: They arrived back home again [from Tsingtao?]. It was fun to be back, though one of the best goats was ill. Thursday, September 2: Oscar and Hellen came home unexpectedly. They thought it best to do so. They were able to get past the blackout and the 'long pants' [Mom, who are the 'long pants'? Doris does not know who the 'long pants' were]. Friday, September 3: School began. It was nearly impossible to find room in the church. There were about fifty new students in the mission school, not many in the Bible school, but the grade school and grammar school were packed with more than the usual number of students. Though Hedvig was always interested in education, evangelism was never far from her mind: "God, help us to win them for you," she wrote. Egron preached a powerful sermon on Philippians 3:8. More bad news, though: rumors had it that war between Sweden and Germany was brewing. "God protect our home land," Hedvig writes. Monday, September 6.
Hedvig began teaching her first class. She had been asked to teach "Harmony of the Gospels." It would be a lot of work considering she hadn't had this course in years. With the mission school boys she had Galatians and 1 Corinthians. The Chinese had more news for the missionaries: the English and the Americans had landed troops in Italy. Tuesday, September 7: Maj Brit A., only nineteen years of age, 'took high school' from the Swedish high school in Kiaohsien in the spring and got engaged to a Mr. Dahlberg at midsummer. Hedvig didn't seem to think this too young. She was the daughter of Swedish missionaries from another mission. A bit of rain fell, but not enough to do the harvest any good. Food would become more expensive, Hedvig thought. Wednesday, September 8: Someone sent Hedvig $15, which was not too small an amount in those days. It was not designated for anything in particular; she could use it for whatever she wanted. It would be enough to pay the tuition for Goa-Su-chong [Mom, who is he/she?]. News in Europe was heartening. The British had taken four cities in Italy. The Russians had broken through at the Dnieper. Bad news, though: in Peking a bad cholera epidemic was spreading. Arne Bergman was a guest for the night [perhaps the night before]. He arrived happy and well. Robert was expected on Wednesday. Leander and R. Erickson were [or were to be] visiting Tsingtao and then Kiaohsien. News arrived from Göteborg that Mormor [Hedvig's mother - name was Moster Alma?] was very sick, with cancer lumps under the armpit, and had to go to the hospital. The doctors were to begin with radiation and later perhaps surgery. Hellen and Dollan sat crying at the opening of the school. [Many years before, mormor Ida also had cancer, but Doris thinks she had a mastectomy.] K. Solberg was almost killed by poisoned gengas [producer gas in English. What is that?]. Ingrid Andreén spoke at the opening of the school.
Her talk was short and sweet according to Hedvig, who never seemed to like long sermons. Sunday, September 12: Hosea Swensson spoke, again 'short and sweet.' He [and his wife?] later had coffee at Hellen's. Ando [perhaps his wife] was gray and bent. Anna looked pretty good. They finished their time together with song, Bible verse and prayer. Friday, September 24: Hedvig and Oscar traveled to Hangia tsuen to attend Gi-Da-san's funeral. [Doris does not know who this person is, but thinks she was probably a 'Bible woman' connected to the Swedish Baptist Mission.] It was as simple as it could be, considering the rain and ? [can't read your handwriting, Mom]. Around the wake stood her four sons with their wives, three daughters, and all her grandchildren. She had been baptized for 3 years [Mom, is this the correct translation of this sentence?]. "Peace over her memory," Hedvig writes. Hedvig wrote a piece for the Vecko Posten about Gi-Da-san's life. Sunday, September 26: It rained during the night before and during the day. Despite the weather Oscar went to Long-gia-tswen anyway. Sunday, October 3: Pastor Han gave a good sermon as usual. The text was about God's promise to Abraham about his 'seed' and the sands of the ocean. His ideas were well thought out. The Swedish school invited everyone to Svensk Afton, or 'Swedish Eve,' which was an evening dedicated to all things Swedish in food and language. The invitations were well done, with faces of the missionaries cut out from photographs and pasted on the invitations. No one was allowed to speak English or Chinese the entire evening; everyone was required to speak Swedish. Whoever broke the rule had to pay money into the kitty. Sten Lindberg's father, who was very 'proper' and formal, heard someone speak English mixed with their Swedish. He walked up with a serious look on his face to enlighten the transgressors of their infraction of the rules of the evening and said, "Du skall inte mixe Engelska med Svenska.
Du skall talle straight." Without thinking, he had thrown two English words into his Swedish sentence. The translation is: "You shall not mix English with Swedish. You shall speak straight." Everyone, young and old, laughed until they cried. For the rest of the evening everyone just put in an English or Chinese word when they couldn't remember the Swedish. [Interview of Doris Brown by LJH via recorded phone, December 2007.] Later [or before?] everyone did their best to recite (or read aloud) Victor Rydberg, a Swedish author, and to have their fill of Swedish food. Wednesday, October 6: Teaching (or homework) during the day. They (Hedvig, Egron and whoever else) had Dr. and Mrs. Eitel over for dinner. Dinner was followed by a lot of singing, Bible study and prayer led by Egron. Everyone had a good time. Friday, October 8: The teaching (or homework) was difficult. But now it was all over and everyone had a week off. Hedvig hoped that all the teaching was not in vain. The day before, a letter had arrived from Sweden saying that Mama Colldén (Mom, is this Ida?) had had an operation for breast cancer. Sunday, October 10: Double Ten Day, Chinese Fatherland's Day. But never had China been so broken up, Hedvig thought. Poor people in a broken country. Oscar, Hellen, Egron, Jansson and Pastor Lee from Kaomi had traveled to Hong-shi on inspection. They found everything to be OK. Wednesday, October 13: "Again, God has given me a birthday," Hedvig writes. "All in grace from beginning to end." Hedvig was congratulated by her family and the Ki family. Later everyone had 'mian-tang,' or Chinese noodles, and cake and coffee. After eating, everyone was happy and content. Best of all was the service afterwards. Thursday, October 14: Egron, Gerda and Oscar had gone to Hedvig's for a short visit the day before. The missionaries also got Russian news. They were winning over the Germans! Friday, October 15.
Not much rain had fallen lately. (Len, check the rainfall in this part of China during this time.) The land had been dry and thirsty. But the night before, a nice rain had fallen that continued on into the day on Friday. The rain gave hope for a good wheat harvest. Tuesday, October 18: Oscar arrived back from his trip to Kaomi. There a new church of almost 300 members had been started in Da-Tsingia-choong. The members were called 'Wei Bin' after a river that runs through there. The church services were good, and Martin Jansson was installed as pastor. Friday, October 22: Birthday congratulations arrived for Hedvig from Gothenburg. She was pleased to be remembered back home. She wrote a letter in response. The week's lessons were over. It was a scary time for Hedvig. She didn't know if this fourth class of the Bible school would accept her teaching. "God help both me and them," she writes. Two students left the high school to go west. Among them was Hedvig's student Wang-lien-sho. "May God protect them," she prays, because much suffering would come to them. Sunday, October 24: Oscar preached a wonderful sermon on John 8:31-36 (may be 'Johannes'; if so, would this be Revelation?). 'The truth shall make you free' was the message. Oscar looked pale and tired. Afterwards several people got together and celebrated Esther Wahlin's birthday. Among those at the celebration were Mrs. Shultz, Mrs. Matzal, Nina, Stretcher and two other ladies. Wednesday, October 27: Gerda was operated on for appendicitis and uterine prolapse [Mom, what is that?]. Hellen stayed with her. Egron returned in the evening. Dr. Schmidt and nurse Gerda were kidnapped by guerillas from Braunes House on Laoshan Mountain. Mrs. Schmidt, though not kidnapped, was shot in the thigh. Dr. Eitel had to travel at 3:00 in the morning to bring her home. She had bled a lot and was in shock. To save her life Dr. Eitel gave her 600cc of blood. 'They' sent out an ombudsman to talk to the guerrillas.
No doubt they wanted money. Ironically, the story was that Dr. Schmidt and nurse Gerda were having an affair, so it was quite convenient that they were kidnapped together. They were both held several months and eventually let go (recorded telephone interview with Doris Brown by LJH, December 5, 2007). Saturday, October 30. Hedvig was at the women's conference in Kaomi. It was a peaceful and good time away. Nina and Wang-wan-hsien (sp?) spoke at the conference about the dress code (sure would like to know what the code was! Len). Tsaing-mo-she spoke on how Mary poured oil over Jesus' feet. Cheo-gue-sio (sp?) and Mrs. Chao both gave speeches 'with a lot of frills' according to Hedvig. Mrs. Chen gave a lecture on Baptist history to 1654. Gerda was doing much better. Sunday, October 31. It was a wonderful Sunday. Mr. Chien (sp?) from Hwanghsien was led by God in his preaching, and he preached twice. Gerda was improving very well. Mrs. Schmidt had had three blood transfusions and had regained consciousness. What was depressing, though, was that there had been no news of Dr. Schmidt and the nurse. Hedvig had ???? Chin, Ki and Hes (sp?) for dinner. They had a great time. Tuesday, November 2. Everyone was back home after attending the conference in Kaomi. It had been a good conference. Now they were going to have more 'work-related time' with the guys from the north, three of whom were members of the seminary board: Egron, Principal Wang, and Pastor Kung [who was later murdered]. If they formed a committee, Oscar and two more brothers would join. Sunday, November 7. Hedvig was thinking about the war and the apparent friendship between the Germans and the Japanese. It was ironic. Twenty-nine years ago Tsingtao had been held by the Germans; the Germans in turn lost it to the Japanese. Now they were friends. How deep that friendship went no one knew. The Sunday was peaceful and quiet. Oscar lectured on church membership and did an excellent job. 
Hedvig thought that he should really be the Bible School teacher rather than herself. Principal Ki, though committed to teaching religion in the grammar school again, thought it better to wait for the Swedish-Chinese conference. Hedvig agreed. Monday, November 8. Egron returned home with good news about Gerda. Each day she was growing in strength. But now things were not good with Hellen. Dr. Eitel was not satisfied with the sound of her heart. Two Japanese came with interpreters to the Bible School and to Egron's. There was no news about Dr. Schmidt and nurse Gerda. Thursday, November 11. Both Gerda and Hellen returned from the hospital in Tsingtao. Both were doing pretty well. Oscar's blood pressure, however, was too low at 112. His heart apparently was not good. What is more, he had a busy month coming up, with poor food while traveling. It would be better if he could take time off. News arrived that the Americans had taken the largest island in the Solomons, but at the cost of many ships and planes. But according to the Japanese (my assumption, LJH), they (the Japanese) had not lost very much. It was difficult to tell if this was the truth or not. All the flowers were taken in the first frost the night before. Sunday, November 14. Sunday was a good day for Hedvig. Pastor Sia had as his text Acts 1:8, which is about the strength of the Holy Spirit. In the afternoon he spoke about true people and true words. After his sermon he asked non-Christians to come forward for prayer if they wanted to. A good crowd of middle school boys came forward and knelt at the altar, and they were prayed for. "May their salvation be true and not uncertain," Hedvig writes. Oscar had to travel to Wangtai using John Alfred's old cart. The car battery was dead. He did not have a lot of strength to make the journey. Tuesday, November 16. The meetings lasted for another two days. God was working among the people, but the Christians longed for more work of his Holy Spirit. 
Hedvig wondered if they themselves were keeping God from working because of the selfishness in their own hearts. "Light a fire in their hearts," Hedvig told God, "and burn up all apathy in our hearts." On Wednesday, December 15, John Alfred came into the room with a letter from Edith. It was a big surprise. It had been so long since they had heard from Edith. The markings on the letter [30] indicated that it had gone through the customs house in Chungking. "How lovely it is to see her handwriting again and to know she is OK," Hedvig writes. [31] Hellen was due to travel the next day [back home, I presume. LJH]. December 18: Oscar, still not feeling well, had to go in for more tests a few days before. Dr. Eitel had to pump out the contents of Oscar's stomach to see whether it contains enough stomach acid, which he doubts. That, he thinks, is why Oscar's stomach is not functioning properly, and it may be due to his nerves. All this sickness worries Hedvig, of course. "Father Jesus, come to our rescue," she writes. December 23: Hedvig receives a big shock. Koang-hoa-feng paid back his debt of $4000. Hedvig had received no payment for the last two years. "That's how it works sometimes," she writes. Sometimes you don't have money and sometimes you unexpectedly do. But no matter, "my salary is in heaven." Christmas Eve arrives and everything is prepared, though this year Christmas Eve celebrations were at Ester [Wahlin] and Anna Jansson's. Hopefully Oscar would be strong enough to join them. December 31 [Friday] arrives and Hedvig records the last day of the year in her diary: "last evening for this page," she writes, "and the year's tears. So many beautiful dreams and wishes never came." [Though she doesn't tell us what these dreams and wishes were. LJH]. The last line perhaps made her think she may not be feeling thankful, because she follows it up with, "And yet how much love we gotten [past tense?] every day from our Father's caring and loving hand. 
Thank you God for all you have given." The missionaries could go through very difficult times, but still thank God for his loving care. In the very next paragraph of her diary Hedvig mentions that Gerda is sick but getting better. Oscar is trying to rest but is frequently visited by Chinese, probably well-wishers who wish him the best but keep him from needed rest. Dollan and her mom had to take a train to Tsingtao for a doctor's appointment. The only room available was on a freight train. They had gotten on and the train proceeded down the line, but stopped at another station to let on passengers. People were eager to get on the train to get away from guerrilla and/or Japanese fighting. If they couldn't get into a train car, they tried to climb onto the roofs of the train cars and hang on. A high school student by the name of Eivor, a Swedish girl from another mission, went in for a tonsillectomy and died on the operating table. [Best guess is the year was 1943.] She had come to Kiaohsien because the Swedish Baptist mission there had the only high school. Arne Bergman, Dollan's and Eivor's fellow student, brought her to the hospital. Not long afterward Dollan was called over to Egron and Gerda's house, where they broke the news to her. Eivor was an 'A' student in the high school who hardly had to study at all for her classes. She is buried close to John Alfred Rinell. 1. Mrs. Rieder's husband was with the Neutral Nations Supervisory Commission for Switzerland. Years later Oscar met him in Korea. 2. See book entitled The Shantung Compound. 3. Oscar visited the compound once or twice. See newspaper clipping of his visit there. 4. Mom, was everyone's arm band white or just the Swedes'? 5. Many years later in America Dollan still prepared sweet potatoes the same way. 6. Shantung Daily News, June 21, 1943. 7. Years later in Sweden Dollan took care of King Gustav V, who had come into the hospital for x-rays. 8. Thou Lord Art My Rock, May 20, 1943. 9. 
Thou Lord Art My Rock, May 20, 1943. 10. Thou Lord Art My Rock, May 23, 1943. 11. She notes that Mr. Boosen's grave lies right behind Mr. Krogh's grave. She probably mentioned that fact as a 'landmark' for finding John Alfred's grave. It doesn't matter now; all the gravestones were used by the communists for sidewalks. 12. Schwester Friedel was Dollan's head nurse later when Dollan was in training. This was when Dollan had turned 16 and the Nazis were out of the hospital. Up till then the Nazis were the only ones who could train nurses. The Swiss then took over and Dollan started nurses' training. Friedel, Dollan says, was a nice lady. Also, was she Swiss? What led up to you going to nurses' training? Describe how it was when you first started. 14. Dollan says they are still living as of June 1995, though she has Alzheimer's. Their son lives near Linnea church in Gothenburg, Sweden. 14.1 Email from Dollan Brown to Lennart Holmquist, September 8, 2007. 15. In the 1990's at least they were still alive and living in Sweden near Linnea Church. Dollan babysat their cute son, Göran, who is very nice and also lives in Göteborg. The older Andreens did not speak Chinese, or at least not very well. Their mission station was Churching and then Gaomi while they were in China with SBM. Later in Sweden the older Andreens both had Alzheimer's and died in an old people's home. Göran married and lived in an apartment a few doors down from Linnea church in Gothenburg, Sweden. He worked in an art store. He would visit Oscar at his cabin, called Sulatorp, in Sweden, always bringing orchids when he visited. See email from Dollan Brown to Lennart Holmquist, September 12, 2007. 16. Dollan thinks these brothers may be evangelists, but she is not sure. 17. Thou Lord Art My Rock, June 22, 1943. 17.1 Email from Dollan Brown to Lennart Holmquist, September 11, 2007. 18. 
Hedvig implies she got home on the 12th, but she actually would have gotten home on the 11th if she made the morning service on Sunday, which was on the 11th. There were no Monday morning services, according to Dollan. 19. [Len, check this out in WWII European history]. 20. The only German missionary in Kiaohsien was Miss Strecker, a Lutheran. 21. Thou Lord Art My Rock, August 9. 22. Arne was at Oscar Rinell's 95th birthday in Sweden. Arne's home was/is in Linköping. Robert had already passed away. 23. Dollan doesn't know them. 24. Len, check the original paragraph in Thou Lord Art My Rock and make sure you've understood this correctly: September 29, 1943. 25. Thou Lord Art My Rock, September 29, 1943. 26. Dollan dated her son. Mrs. Matzal was later killed by a drunk driver who hit her while she was walking. Dollan helped Helmut get her ready for burial. 27. Thou Lord Art My Rock, October 30, 1943. 28. Thou Lord Art My Rock, November 5, 1943. 29. Last sentence is my own. Len, check the Tsingtao Times to see if it was indeed mentioned. Also, check the spelling of the town's name. 30. I'm assuming the markings on the letter said this. Len. 31. Thou Lord Art My Rock, December 15. 32. Apparently Hedvig prayed every morning, because she writes here, "That is my morning prayer this morning." (Though she doesn't say what the prayer is.) 33. Dollan thinks that she was about 15 when this incident occurred, which would put the date at about 1943.
The aggregate market value of the common stock held by non-affiliates of the Registrant, computed by reference to the closing sale price on The NASDAQ Stock Market as of the last business day of the Registrant’s most recently completed second fiscal quarter, June 25, 2017, was $1,979,090,627. As of February 20, 2018, there were 33,538,310 shares of the Registrant’s common stock outstanding. Portions of Part III of this annual report are incorporated by reference to the Registrant’s Proxy Statement for the Annual Meeting of Stockholders to be held May 2, 2018. Papa John’s International, Inc., a Delaware corporation (referred to as the “Company”, “Papa John’s” or in the first person notations of “we”, “us” and “our”) operates and franchises pizza delivery and carryout restaurants and, in certain international markets, dine-in and delivery restaurants under the trademark “Papa John’s”. Papa John’s began operations in 1984. At December 31, 2017, there were 5,199 Papa John’s restaurants in operation, consisting of 743 Company-owned and 4,456 franchised restaurants operating domestically in all 50 states and in 44 countries and territories. Our Company-owned restaurants include 246 restaurants operated under five joint venture arrangements and 35 units in Beijing and North China. High-Quality Menu Offerings. Our menu strategy focuses on the quality of our ingredients. Domestic Papa John’s restaurants offer high-quality pizza along with side items, including breadsticks, cheesesticks, chicken poppers and wings, dessert items and canned or bottled beverages. Papa John’s original crust pizza is prepared using fresh dough (never frozen). In addition, during 2016 we introduced a fresh pan dough crust to the domestic system. 
Papa John’s pizzas are made from a proprietary blend of wheat flour, real cheese made from mozzarella, fresh-packed pizza sauce made from vine-ripened tomatoes (not from concentrate) and a proprietary mix of savory spices, and a choice of high-quality meat and vegetable toppings. Our original and pan dough crust pizza is delivered with a container of our special garlic sauce and a pepperoncini pepper. In addition to our fresh dough pizzas, we offer a par-baked thin crust. Each is served with a pepperoncini pepper. We have a continuing “clean label” initiative to remove unwanted ingredients from our product offerings, such as synthetic colors, artificial flavors and preservatives, announcing in 2016 and 2017 that we had removed an additional fifteen unwanted ingredients across our entire food menu during the two years. We also offer limited-time pizzas on a regular basis and expect to continue to test new product offerings both domestically and internationally. The new products can become a part of the permanent menu if they meet certain internally established guidelines. Commitment to Team Member Training and Development. We are committed to the development and motivation of our team members through training programs, including our leadership development program, incentive and recognition programs and opportunities for advancement. Team member training programs are conducted for Company-owned restaurant team members, and operational training is offered to our franchisees. We offer performance-based financial incentives to corporate team members and restaurant managers. Marketing. Our domestic marketing strategy consists of both national and local components. Our national strategy includes national advertising via television, print, direct mail, digital, mobile marketing and social media channels. Our digital marketing activities have increased significantly over the past several years in response to increasing consumer use of online and mobile web technology. 
Local advertising programs include television, radio, print, direct mail, store-to-door flyers, digital, mobile marketing and local social media channels. See “Marketing Programs” below for a description of additional local marketing programs. Technology. We use technology to deliver a better customer experience, focusing on key strategies that offer benefits to the customer as well as advancing our objectives of higher customer lifetime value, deeper brand affinity and greater sustained advantage over traditional and emerging competitors. Our latest technology initiatives, such as launching a restaurant ordering app on Apple TV in 2016, build on our past milestones, which include the introduction of digital ordering across all our U.S. delivery restaurants in 2001 and the launch of a domestic digital rewards program in 2010. In 2017, over 60% of domestic sales were placed through digital channels. During 2017, we also became the first national pizza brand to integrate with Facebook Instant Ordering, expanded mobile app promotions, launched Papa Track with delivery status, enhanced social sharing and special digital discounts, strengthened alternative payments with the addition of PayPal, and targeted new “Perks” incentives for PAPA REWARDS® loyalty members. Because the Company operates its own restaurants and business, we devote significant resources to providing franchisees with assistance in restaurant operations, training, marketing, site selection and restaurant design. Our strategy for global franchise unit growth focuses on our sound unit economics model. We strive to eliminate barriers to expansion in existing international markets, and identify new market opportunities. Our growth strategy varies based on the maturity and penetration of the market and other factors in specific domestic and international markets, with overall unit growth expected to come increasingly from international markets. We are committed to maintaining sound restaurant unit economics. 
In 2017, the 676 domestic Company-owned restaurants included in the full year’s comparable restaurant base generated average annual unit sales of $1.19 million ($1.17 million on a 52-week basis). Our North American franchise restaurants, which included 2,403 restaurants in the full year’s comparable base for 2017, generated average annual unit sales of $908,000 ($891,000 on a 52-week basis). Average annual unit sales for North American franchise restaurants are lower than those of Company-owned restaurants as a higher percentage of our Company-owned restaurants are located in more heavily penetrated markets. With only a few exceptions, domestic restaurants do not offer dine-in, which reduces our restaurant capital investment. The average cash investment for the seven domestic traditional Company-owned restaurants opened during 2017, exclusive of land, was approximately $354,000 per unit, compared to the $339,000 investment for the 12 domestic traditional units opened in 2016, excluding tenant allowances that we received. Over the past few years, we have experienced an increase in the cost of our new restaurants primarily as a result of building larger units to accommodate increased sales, an increase in the cost of certain equipment as a result of technology enhancements, and increased costs to comply with applicable regulations. “Non-traditional” Papa John’s restaurants generally do not provide delivery service but rather provide walk-up or carryout service to a captive customer group within a designated facility, such as a food court at an airport, university or military base or an event-driven service at facilities such as sports stadiums or entertainment venues. Non-traditional units are designed to fit the unique requirements of the venue and may not offer the full range of menu items available in our traditional restaurants. All of our international restaurants are franchised, except for 35 Company-owned restaurants in Beijing and North China. 
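The parallel "annual" and "52-week basis" figures above are consistent with fiscal 2017 containing an extra, 53rd week; as a rough illustration (an inference from the reported numbers, not a calculation disclosed in the filing), scaling the annual figures by 52/53 reproduces the quoted 52-week amounts:

```python
# Rough check of the 52-week-basis figures quoted above, assuming fiscal 2017
# contained 53 weeks (an inference from the numbers, not a disclosed figure).
company_owned_annual = 1_190_000  # average annual unit sales, Company-owned
franchise_annual = 908_000        # average annual unit sales, franchised

def to_52_week(annual_sales, weeks_in_year=53):
    """Scale a 53-week annual total down to a comparable 52-week total."""
    return annual_sales * 52 / weeks_in_year

print(round(to_52_week(company_owned_annual), -4))  # ~1,170,000
print(round(to_52_week(franchise_annual), -3))      # ~891,000
```

Both results match the reported $1.17 million and $891,000 figures, which is why the 53-week assumption seems reasonable here.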
Generally, our international Papa John’s restaurants are slightly smaller than our domestic restaurants and average between 900 and 1,400 square feet; however, in order to meet certain local customer preferences, some international restaurants have been opened in larger spaces to accommodate both dine-in and restaurant-based delivery service, ranging from 35 to 140 seats. Although most of our domestic Company-owned markets are well-penetrated, our Company-owned growth strategy is to continue to open domestic restaurants in existing markets as appropriate, thereby increasing consumer awareness and enabling us to take advantage of operational and marketing efficiencies. Our experience in developing markets indicates that market penetration through the opening of multiple restaurants in a particular market results in increased average restaurant sales in that market over time. We have co-developed domestic markets with some franchisees or divided markets among franchisees and will continue to utilize market co-development in the future, where appropriate. Of the total 3,441 North American restaurants open as of December 31, 2017, 708 units, or approximately 20%, were Company-owned (including 246 restaurants owned in joint venture arrangements with franchisees in which the Company has a majority ownership position and control). Operating Company-owned restaurants allows us to improve operations, training, marketing and quality standards for the benefit of the entire system. From time to time, we evaluate the purchase or sale of units or markets, which could change the percentage of Company-owned units. Subsequent to December 31, 2017, we entered into an Asset Purchase Agreement to refranchise 31 jointly owned stores in the Denver, Colorado market to an existing franchisee. Of the 1,758 international restaurants open as of December 31, 2017, 35 units or 2.0% were Company-owned (all of which are located in Beijing and North China). 
We plan to sell the Company-owned China restaurants and the China QC Center in 2018. Accordingly, as of December 31, 2017, the Company’s China operations, including these restaurants and the QC Center, are classified as held for sale in the accompanying consolidated financial statements. Our North American QC Center system currently comprises 11 full-service regional production and distribution centers in the U.S., including a full-service QC Center in Georgia, which opened during 2017, that supply pizza sauce, dough, food products, paper products, smallwares and cleaning supplies twice weekly to each traditional restaurant they serve. Additionally, we have one QC Center in Canada, which produces and distributes fresh dough. This system enables us to monitor and control product quality and consistency, while lowering food and other costs. We evaluate the QC Center system capacity in relation to existing restaurants’ volumes and planned restaurant growth, and facilities are developed or upgraded as operational or economic conditions warrant. We currently own full-service international QC Centers in Milton Keynes, United Kingdom; Mexico City, Mexico; and Beijing, China. Other international QC Centers are licensed to franchisees or non-franchisee third parties and are generally located in the markets where our franchisees have restaurants. Within our North American QC Center system, products are primarily distributed to restaurants by leased refrigerated trucks operated by us. The restaurant-level and Co-op marketing efforts are supported by media, print, digital and electronic advertising materials that are produced by Papa John’s Marketing Fund, Inc. (“PJMF”). PJMF is an unconsolidated nonstock corporation designed to operate at break-even for the purpose of designing and administering advertising and promotional programs for all participating domestic restaurants. 
PJMF produces and buys air time for Papa John’s national television commercials, buys digital media such as banner advertising, paid search-engine advertising, mobile marketing, social media advertising and marketing, text messaging, and email. It also engages in other brand-building activities, such as consumer research and public relations activities. Domestic Company-owned and franchised Papa John’s restaurants are required to contribute a certain minimum percentage of sales to PJMF. The contribution rate to PJMF can be set at up to 3% of sales, if approved by the governing board of PJMF, and beyond that level if approved by a supermajority of domestic restaurants. The domestic franchise system approved a new contribution rate of 4.25% effective in the fourth quarter of 2016. The rate will increase an additional 0.25% in annual increments until the rate reaches 5.0% of sales in 2019 and is currently 4.50%. Our proprietary domestic digital ordering platform allows customers to order online, including “plan ahead ordering,” Apple TV ordering and Spanish-language ordering capability. Digital payment platforms include VISA Checkout, PayPal, and Venmo PayShare. We provide enhanced mobile ordering for our customers, including Papa John’s iPhone® and Android® applications. Our Papa Rewards® program is a customer loyalty program designed to increase loyalty and frequency; we offer this program domestically, in the UK, and in several international markets. We receive a percentage-based fee from North American franchisees for online sales, in addition to royalties, to defray development and operating costs associated with our digital ordering platform. We believe continued innovation and investment in the design and functionality of our online and mobile platforms is critical to the success of our brand. We provide both Company-owned and franchised restaurants with pre-approved marketing materials and catalogs for the purchase of promotional items. 
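The PJMF contribution-rate escalation described above can be sketched as a small function; the exact effective date within each year is an assumption for illustration (the filing gives only the 4.25% start in the fourth quarter of 2016, the 0.25-point annual increments, and the 5.0% cap reached in 2019):

```python
# Sketch of the PJMF contribution-rate schedule as described: 4.25% of sales
# starting in 2016, rising 0.25 percentage points per year, capped at 5.0%
# in 2019. Effective dates within each year are assumed, not disclosed.
def pjmf_rate(year):
    """Contribution rate (percent of sales) in effect for the given year."""
    base, step, cap = 4.25, 0.25, 5.0
    return min(base + step * max(year - 2016, 0), cap)

for year in range(2016, 2020):
    print(year, pjmf_rate(year))  # 4.25, 4.5, 4.75, 5.0
```

The 2017 value of 4.50% matches the "currently 4.50%" figure in the text for the period covered by this report.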
We also provide direct marketing services to Company-owned and domestic franchised restaurants using customer information gathered by our proprietary point-of-sale technology (see “Company Operations—North America Point-of-Sale Technology”). In addition, we provide database tools, templates and training for operators to facilitate local email marketing and text messaging through our approved tools. Local marketing efforts, such as sponsoring or participating in community events, sporting events and school programs, are also used to build customer awareness. North America Point-of-Sale Technology. Our proprietary point-of-sale technology, “FOCUS”, is in place in all North America traditional Papa John’s restaurants. We believe this technology facilitates fast and accurate order-taking and pricing, and allows the restaurant manager to better monitor and control food and labor costs, including food inventory management and order placement from QC Centers. The system allows us to obtain restaurant operating information, providing us with timely access to sales and customer information. The FOCUS system is also integrated with our digital ordering solutions in all North American traditional Papa John’s restaurants. Domestic Hours of Operation. Our domestic restaurants are open seven days a week, typically from 11:00 a.m. to 12:30 a.m. Monday through Thursday, 11:00 a.m. to 1:30 a.m. on Friday and Saturday and 12:00 noon to 11:30 p.m. on Sunday. Carryout hours are generally more limited for late night, for security purposes. General. We continue to attract qualified and experienced franchisees, whom we consider to be a vital part of our system’s continued growth. We believe our relationship with our franchisees is good. As of December 31, 2017, there were 4,456 franchised Papa John’s restaurants operating in all 50 states and 44 countries and territories. 
During 2017, our franchisees opened an additional 367 (110 North America and 257 internationally) restaurants, which includes the opening of Papa John’s restaurants in two new countries. As of December 31, 2017, we have development agreements with our franchisees for approximately 200 additional North America restaurants, the majority of which are committed to open over the next two to three years, and agreements for approximately 990 additional international franchised restaurants, the majority of which are scheduled to open over the next six years. There can be no assurance that all of these restaurants will be opened or that the development schedules set forth in the development agreements will be achieved. Our standard franchise agreement requires the franchisee to pay a royalty fee of 5% of sales, and the majority of our existing franchised restaurants have a 5% royalty rate in effect. Over the past several years, we have offered various development incentive programs for domestic franchisees to accelerate unit openings. Such incentives included the following for 2017 traditional openings: (1) waiver of the standard one-time $25,000 franchise fee if the unit opens on time in accordance with the agreed-upon development schedule, or a reduced fee of $5,000 if the unit opens late; (2) the waiver of some or all of the 5% royalty fee for a period of time; (3) a credit for a portion of the purchase of certain leased equipment; and (4) a credit to be applied toward a future food purchase, under certain circumstances. We believe development incentive programs have accelerated unit openings. We provide assistance to Papa John’s franchisees in selecting sites, developing restaurants and evaluating the physical specifications for typical restaurants. We provide layout and design services and recommendations for subcontractors, signage installers and telephone systems to Papa John’s franchisees. 
Our franchisees can purchase complete new store equipment packages through an approved third-party supplier. We sell replacement smallwares and related items to our franchisees. Each franchisee is responsible for selecting the location for its restaurants, but must obtain our approval of the restaurant design and location based on traffic accessibility and visibility of the site and targeted demographic factors, including population density, income, age and traffic. In 2018, we plan to offer some or all of these domestic franchise support initiatives, with a particular focus on providing assistance to franchisees in emerging and/or high cost markets. Non-traditional Restaurant Development. We had 256 non-traditional domestic restaurants at December 31, 2017. Non-traditional restaurants generally cover venues or areas not originally targeted for traditional unit development, and our franchised non-traditional restaurants have terms differing from the standard agreements. Franchisee Loans. Selected domestic and international franchisees have borrowed funds from us, principally for the purchase of restaurants from us or other franchisees or for construction and development of new restaurants. Loans made to franchisees can bear interest at fixed or floating rates and in most cases are secured by the fixtures, equipment and signage of the restaurant and/or are guaranteed by the franchise owners. At December 31, 2017, net loans outstanding totaled $19.9 million. See “Note 11” of “Notes to Consolidated Financial Statements” for additional information. Domestic Franchise Training and Support. Our domestic field support structure consists of franchise business directors, each of whom is responsible for serving an average of 165 franchised units. Our franchise business directors maintain open communication with the franchise community, relaying operating and marketing information and new initiatives between franchisees and us. 
Every franchisee is required to have a principal operator approved by us who satisfactorily completes our required training program. Principal operators for traditional restaurants are required to devote their full business time and efforts to the operation of the franchisee’s traditional restaurants. Each franchised restaurant manager is also required to complete our Company-certified management operations training program. Ongoing compliance with training is monitored by the Global Operations Support and Training team. Multi-unit franchisees are encouraged to appoint training store general managers or hire a full-time training coordinator certified to deliver Company-approved operational training programs. International Franchise Operations Support. We employ or contract with international business directors who are responsible for supporting one or more franchisees. The international business directors usually report to regional vice presidents. Senior management and corporate staff also support the international field teams in many areas, including, but not limited to, food safety, quality assurance, marketing, technology, operations training and financial analysis. Franchise Advisory Council. We have a franchise advisory council that consists of Company and franchisee representatives of domestic restaurants. We also have a franchise advisory council in the United Kingdom and a newly formed Brand Advisory Council consisting of franchisees throughout the world. The various councils and subcommittees hold regular meetings to discuss new product and marketing ideas, operations, growth and other business issues. From time to time, certain domestic franchisees have also formed a separate franchise association for the purpose of communicating and addressing issues, needs and opportunities among its members. 
We currently communicate with, and receive input from, our franchisees in several forms, including through the various councils, annual operations conferences, system communications, national conference calls, various regional meetings conducted with franchisees throughout the year and ongoing communications from franchise business directors and international business directors in the field. Monthly webcasts are also conducted by the Company to discuss current operational, marketing and other issues affecting the domestic franchisees’ business. We are committed to communicating with our franchisees and receiving input from them. Many of our competitors have operated for longer periods than Papa John’s and can have higher levels of restaurant penetration and stronger, more developed brand awareness in markets where we compete. According to industry sources, domestic QSR Pizza category sales, which include dine-in, carry-out and delivery, totaled approximately $36 billion in 2017, an increase of 1% from the prior year. Competition from delivery aggregators and other food delivery concepts continues to increase. With respect to the sale of franchises, we compete with many franchisors of restaurants and other business concepts. There is also active competition for management personnel, drivers and hourly team members, and for attractive commercial real estate sites suitable for Papa John’s restaurants. We, along with our franchisees, are subject to various federal, state, local and international laws affecting the operation of our respective businesses, including laws and regulations related to the preparation and sale of food, such as food safety and menu labeling requirements. Each Papa John’s restaurant is subject to licensing and regulation by a number of governmental authorities, which include zoning, health, safety, sanitation, building and fire agencies in the state or municipality in which the restaurant is located.
Difficulties in obtaining, or the failure to obtain, required licenses or approvals could delay or prevent the opening of a new restaurant in a particular area. Our QC Centers are licensed and subject to regulation under state and local health and fire codes, and the operation of our trucks is subject to federal and state transportation regulations. We are also subject to federal and state environmental regulations. In addition, our domestic operations are subject to various federal and state laws governing such matters as minimum wage requirements, benefits, working conditions, citizenship requirements and overtime. We are subject to Federal Trade Commission (“FTC”) regulation and various state laws regulating the offer and sale of franchises. The laws of several states also regulate substantive aspects of the franchisor-franchisee relationship. The FTC requires us to furnish to prospective franchisees a franchise disclosure document containing prescribed information. State laws regulating the franchisor-franchisee relationship presently exist in a significant number of states, and bills introduced in Congress from time to time would, if enacted, provide for federal regulation of certain aspects of the U.S. franchisor-franchisee relationship. State laws often limit, among other things, the duration and scope of non-competition provisions and the ability of a franchisor to terminate or refuse to renew a franchise. Some foreign countries also have disclosure requirements and other laws regulating franchising and the franchisor-franchisee relationship. National, state and local government regulations or initiatives, including health care legislation, “living wage” or other current or proposed regulations, and increases in minimum wage rates affect Papa John’s as well as others within the restaurant industry. As we expand internationally, we are also subject to applicable laws in each jurisdiction.
We are increasingly subject to laws and regulations that require us to disclose calorie content and other specific content of our food, including fat, trans fat and salt content. A provision of the Patient Protection and Affordable Care Act of 2010 (ACA) requires us and many other restaurant companies to disclose calorie information on restaurant menus. The Food and Drug Administration issued final rules to implement this provision, which require restaurants to post the number of calories for most items on menus or menu boards and to make available certain other nutritional information. The implementation of these regulations was delayed until May 2018. A number of states, counties and cities in which we do business have also enacted menu labeling laws, but these local laws will be superseded by the federal requirements once they go into effect. Government regulation of nutrition disclosure could result in increased costs of compliance and could also impact consumer habits in a way that adversely impacts sales at our restaurants. For further information regarding governmental regulation, see Item 1A. Risk Factors. Trademarks. Our marks, including PIZZA PAPA JOHN’S BETTER INGREDIENTS. BETTER PIZZA. & Design, are registered in various foreign countries. From time to time, we are made aware of the use by other persons in certain geographical areas of names and marks that are the same as or substantially similar to our marks. It is our policy to pursue registration of our marks whenever possible and to vigorously oppose any infringement of our marks. We hold copyrights in authored works used in our business, including advertisements, packaging, training, website and promotional materials. In addition, we have registered and maintain Internet domain names, including “papajohns.com,” and approximately 83 country code domains patterned as papajohns.cc, or a close variation thereof, with “.cc” representing a specific country code.
As of December 31, 2017, we employed approximately 22,400 persons, of whom approximately 19,400 were restaurant team members, approximately 900 were restaurant management personnel, approximately 900 were corporate personnel and approximately 1,200 were QC Center and Preferred personnel. Most restaurant team members work part-time and are paid on an hourly basis. None of our team members are covered by a collective bargaining agreement. We consider our team member relations to be good. We are subject to risks that could have a negative effect on our business, financial condition and results of operations. These risks could cause actual operating results to differ from those expressed in certain “forward looking statements” contained in this Form 10-K as well as in other Company communications. Before you invest in our securities, you should carefully consider the following risk factors together with all other information included in this Form 10-K and our other publicly filed documents. Our profitability may suffer as a result of intense competition in our industry. The QSR Pizza industry is mature and highly competitive. Competition is based on price, service, location, food quality, brand recognition and loyalty, product innovation, effectiveness of marketing and promotional activity, use of technology, and the ability to identify and satisfy consumer preferences. We may need to reduce the prices for some of our products to respond to competitive and customer pressures, which may adversely affect our profitability. When commodity and other costs increase, we may be limited in our ability to increase prices. With the significant level of competition and the pace of innovation, we may be required to increase investment spending in several areas, particularly marketing and technology, which can decrease profitability. 
In addition to competition with our larger and more established competitors, we face competition from new competitors and concepts such as fast casual pizza concepts. We also face competitive pressures from food delivery concepts using new delivery technologies, some of which may have more effective marketing. The emergence or growth of new competitors, in the pizza category or in the food service industry generally, may make it difficult for us to maintain or increase our market share and could negatively impact our sales and our system-wide restaurant operations. We face increasing competition from delivery aggregators, delivering food from quick-service or dine-in restaurants, as well as other home delivery services and grocery stores that offer an increasing variety of prepped or prepared meals in response to consumer demand. As a result, our sales can be directly and negatively impacted by actions of our competitors, the emergence or growth of new competitors, consumer sentiment or other factors outside our control. One of our competitive strengths is our “BETTER INGREDIENTS. BETTER PIZZA.” brand promise. This means we may use ingredients that cost more than the ingredients some of our competitors may use. Because of our investment in higher-quality ingredients and our focus on a “clean label”, we could have lower profit margins than some of our competitors if we are not able to establish or maintain premium pricing for our products. Changes in consumer preferences and trends (for example, changes in consumer perceptions of certain ingredients that could cause consumers to avoid pizza or some of its ingredients in favor of foods that are or are perceived as more healthy, lower-calorie or otherwise based on their ingredients or nutritional content) or preferences for a dining experience such as fast casual pizza concepts, could adversely affect our restaurant business and reduce the effectiveness of our marketing and technology initiatives. 
Also, our success depends to a significant extent on numerous factors affecting consumer confidence and discretionary consumer income and spending, such as general economic conditions, customer sentiment and the level of employment. Any factors that could cause consumers to spend less on food or shift to lower-priced products could reduce sales or inhibit our ability to maintain or increase pricing, which could materially adversely affect our operating results. Failure to preserve the value and relevance of our brand could have a negative impact on our financial results. Our results depend upon our ability to differentiate our brand and our reputation for quality. Damage to our brand or reputation could negatively impact our business and financial results. Our brand has been highly rated in U.S. surveys, and we strive to build the value of our brand as we develop international markets. The value of our brand and demand for our products could be damaged by any incidents that harm consumer perceptions of the Company. To be successful in the future, we believe we must preserve, enhance and leverage the value of our brand. Consumer perceptions of our brand are affected by a variety of factors, such as the nutritional content and preparation of our food, the quality of the ingredients we use, our business practices and the manner in which we source the commodities we use. Consumer acceptance of our offerings is subject to change for a variety of reasons, and some changes can occur rapidly. Consumer perceptions may also be affected by third parties presenting or promoting adverse commentary or portrayals of our industry, our brand, our suppliers or our franchisees. If we are unsuccessful in managing incidents that erode consumer trust or confidence, particularly if such incidents receive considerable publicity or result in litigation, our brand value and financial results could be negatively impacted. 
Our inability or failure to recognize, respond to and effectively manage the accelerated impact of social media could adversely impact our business. In recent years, there has been a marked increase in the use of social media platforms, including blogs, chat platforms, social media websites and other forms of Internet-based communications that allow individuals access to a broad audience of consumers and other persons. The rising popularity of social media and other consumer-oriented technologies has increased the speed and accessibility of information dissemination. The dissemination of information via social media could harm our business, brand, reputation, marketing partners, financial condition and results of operations, regardless of the information’s accuracy. In addition, we frequently use social media to communicate with consumers and the public in general. Failure to use social media effectively could lead to a decline in brand value and revenue. Other risks associated with the use of social media include improper disclosure of proprietary information, negative comments about our brand, exposure of personally identifiable information, fraud, hoaxes and malicious dissemination of false information. The success of our business depends on the effectiveness of our marketing and promotional plans. We may not be able to effectively execute our national or local marketing plans, particularly if lower sales result in reduced levels of marketing funds. Our marketing strategy relies on relationships with well-known sporting events, athletes, celebrity personalities and our brand spokesman to market our products. Our business could suffer if we are not able to maintain key marketing relationships and sponsorships, or if we are unable to do so at a reasonable cost, which could require additional investments in alternative marketing strategies.
Actions taken by persons or marketing partners who endorse our products could harm their reputations and, in turn, our brand. From time to time, in response to changes in the business environment and the audience share of marketing channels, we expect to reallocate marketing resources across social media and other channels. That reallocation may not be effective or as successful as the marketing and advertising allocations of our competitors, which could negatively impact the amount and timing of our revenues. Our success increasingly relies on the financial success and cooperation of our franchisees, yet we have limited influence over their operations. Our franchisees manage their businesses independently and therefore are responsible for the day-to-day operation of their restaurants. The revenues we realize from franchised restaurants are largely dependent on the ability of our franchisees to grow their sales. If our franchisees do not experience sales growth, our revenues and margins could be negatively affected. Also, if sales trends worsen for franchisees, especially in emerging and/or high-cost markets, their financial results may deteriorate, which could result in, among other things, restaurant closures, a reduced number of restaurant openings or delayed or reduced payments to us. Our success also increasingly depends on the willingness and ability of our franchisees to remain aligned with us on operating and promotional plans. Franchisees’ ability to contribute to the achievement of our plans is dependent in large part on the availability to them of funding at reasonable interest rates and may be negatively impacted by the financial markets in general or by the creditworthiness of our franchisees.
Our operating performance could also be negatively affected if our franchisees experience food safety or other operational problems or project an image inconsistent with our brand and values, particularly if our contractual and other rights and remedies are limited, costly to exercise or subject to litigation. If franchisees do not successfully operate restaurants in a manner consistent with our required standards, the brand’s image and reputation could be harmed, which in turn could hurt our business and operating results. We rely on a variety of direct marketing techniques, including email, text messages and postal mailings. Any future restrictions in federal, state or foreign laws regarding marketing and solicitation or international data protection laws that govern these activities could adversely affect the continuing effectiveness of email, text message and postal mailing techniques and could force changes in our marketing strategies. If this occurs, we may need to develop alternative marketing strategies, which may not be as effective and could impact the amount and timing of our revenues. Our ability to open new restaurants and grow our business depends on a number of factors, including economic, regulatory and competitive conditions and consumer buying habits. A decrease in sales, or increased commodity or operating costs, including, but not limited to, employee compensation and benefits or insurance costs, could slow the rate of new store openings or increase the number of store closings. Our business is susceptible to adverse changes in local, national and global economic conditions, which could make it difficult for us to meet our growth targets. Additionally, we or our franchisees may face challenges securing financing, finding suitable store locations at acceptable terms or securing required domestic or foreign government permits and approvals. If we do not meet our growth targets or the expectations of the market for net restaurant openings or our other strategic objectives, our stock price could decline.
Our franchisees remain dependent on the availability of financing to remodel or renovate existing locations, upgrade systems and enhance technology, or construct and open new restaurants. From time to time, the Company may provide financing to certain franchisees and prospective franchisees in order to mitigate store closings, allow new units to open or complete required upgrades. If we are unable or unwilling to provide such financing, which is a function of, among other things, a franchisee’s creditworthiness, the number of new restaurant openings may be slower or the rate of closures may be higher than expected, and our results of operations may be adversely impacted. To the extent we provide financing to franchisees, our results could also be negatively impacted by the underperformance of these franchisee loans. Domestic restaurants purchase substantially all food and related products from our QC Centers. We are dependent on Leprino Foods Dairy Products Company (“Leprino”) as our sole supplier for cheese, one of our key ingredients. Leprino, one of the major pizza category suppliers of cheese in the United States, currently supplies all of our cheese domestically and substantially all of our cheese internationally. We also depend on a sole source for our supply of certain desserts, which constitute less than 10% of our domestic Company-owned restaurant sales. While we have no other sole sources of supply for key ingredients or menu items, we do source other key ingredients from a limited number of suppliers. Alternative sources of cheese, desserts, other key ingredients or menu items may not be available on a timely basis or may not be available on terms as favorable to us as under our current arrangements.
Our Company-owned and franchised restaurants could also be harmed by a prolonged disruption in the supply of products from or to our QC Centers due to weather, climate change, natural disasters, crop disease, food safety incidents, regulatory compliance, labor dispute or interruption of service by carriers. In particular, adverse weather or crop disease affecting the California tomato crop could disrupt the supply of pizza sauce to our and our franchisees’ restaurants. Insolvency of key suppliers could also cause similar business interruptions and negatively impact our business. Natural disasters, hostilities, social unrest and other catastrophic events may disrupt our operations or supply chain. The occurrence of a natural disaster, hostilities, epidemic, cyber-attack, social unrest, terrorist activity or other catastrophic events may result in the closure of our restaurants (Company-owned or franchised), our corporate office, any of our QC Centers or the facilities of our suppliers, and can adversely affect consumer spending, consumer confidence levels and supply availability and costs, any of which could materially adversely affect our results of operations. Our insurance programs for workers’ compensation, owned and non-owned vehicles, general liability, property and team member health insurance coverage are funded by the Company up to certain retention levels, generally ranging from $100,000 to $1 million. These insurance programs may not be adequate to protect us, and it may be difficult or impossible to obtain additional coverage or maintain current coverage at a reasonable cost. We also have experienced increasing claims volatility and higher related costs for workers’ compensation, owned and non-owned vehicles and health claims. We estimate loss reserves based on historical trends, actuarial assumptions and other data available to us, but we may not be able to accurately estimate reserves. 
If we experience claims in excess of our projections, our business could be negatively impacted. Our franchisees could be similarly impacted by higher claims experience, hurting their operating results and/or limiting their ability to maintain adequate insurance coverage at a reasonable cost. Our international operations are also subject to additional risk factors, including import and export controls, compliance with anti-corruption and other foreign laws, difficulties enforcing intellectual property and contract rights in foreign jurisdictions, and the imposition of increased or new tariffs or trade barriers. We intend to continue to expand internationally, which would make the risks related to our international operations more significant over time. Our international results, which are substantially franchised, depend heavily on the operating capabilities and financial strength of our franchisees. Any changes in the ability of our franchisees to run their stores profitably in accordance with our operating procedures, or to effectively sub-franchise stores, could result in brand damage, a higher number of restaurant closures and a reduction in the number of new restaurant openings. Our international Company-owned store presence is currently limited to our stores in China, which are classified as held for sale, as we intend to divest those operations in 2018. Sales made by our franchisees in international markets, and certain loans we provide to such franchisees, are denominated in their local currencies, and the U.S. dollar fluctuates relative to those currencies. Accordingly, changes in currency exchange rates will cause our revenues, investment income and operating results to fluctuate. We have not historically hedged our exposure to foreign currency fluctuations. Our international revenues and earnings may be adversely impacted as the U.S. dollar rises against foreign currencies because the local currency will translate into fewer U.S. dollars. Additionally, the value of certain assets or loans denominated in local currencies may deteriorate. Other items denominated in U.S. dollars, including product imports or loans, may also become more expensive, putting pressure on franchisees’ cash flows.
With increased indebtedness, we may have reduced availability of cash flow for other purposes. Increases in interest rates would also increase our debt service costs and could materially impact our profitability as well as the profitability of our franchisees. We currently have total indebtedness of $470 million outstanding under our existing credit facility, which accrues interest at variable rates. With this higher debt level and anticipated future borrowings, we may have reduced available cash flow to plan for or react to business changes, changes in the industry or any general adverse economic conditions. Under our credit facility, we are exposed to variable interest rates. We have entered into interest rate swaps that fix a portion of our interest rates, but an increase in interest expense, whether because of an increase in market interest rates or an increase in borrowings, would increase the cost of servicing our debt and could materially reduce our profitability. By using a derivative instrument to hedge exposures to changes in interest rates, we also expose ourselves to credit risk: the possible failure of the counterparty to perform under the terms of the derivative contract. Higher inflation, and a related increase in costs, including rising interest rates, could also impact our franchisees and their ability to open new restaurants or operate existing restaurants profitably.
We operate in an increasingly complex regulatory environment, and the cost of regulatory compliance is increasing. Our failure, or the failure of any of our franchisees, to comply with applicable U.S. and international labor, health care, food, health and safety, consumer protection, anti-bribery and corruption, competition, environmental and other laws may result in civil and criminal liability, damages, fines and penalties. Enforcement of existing laws and regulations, changes in legal requirements and/or evolving interpretations of existing regulatory requirements may result in increased compliance costs and create other obligations, financial or otherwise, that could adversely affect our business, financial condition or operating results. Increased regulatory scrutiny of food matters and product marketing claims, and increased litigation and enforcement actions, may increase compliance and legal costs and create other obligations that could adversely affect our business, financial condition or operating results. Governments may also impose requirements and restrictions that impact our business. For example, some local government agencies have implemented ordinances that restrict the sale of certain food or drink products. Compliance with new or additional domestic and international laws or regulations, including the European Union General Data Protection Regulation (“GDPR”), which will take effect in May 2018, could increase our compliance costs. These laws and regulations are increasing in complexity and number, change frequently and increasingly conflict among the various countries in which we operate, which has resulted in greater compliance risk and costs. If we fail to comply with these laws or regulations, we could be subject to reputational damage and significant litigation, monetary damages, regulatory enforcement actions or fines in various jurisdictions. For example, a failure to comply with the GDPR could result in fines of up to the greater of €20 million or 4% of annual global revenues.
Higher labor costs and increased competition for qualified team members increase the cost of doing business and of ensuring adequate staffing in our restaurants. Additionally, changes in employment and labor laws, including health care legislation and minimum wage increases, could increase costs for our system-wide operations. Our success depends in part on our and our franchisees’ ability to recruit, motivate and retain a qualified workforce to work in our restaurants in an intensely competitive environment. Increased costs associated with recruiting, motivating and retaining qualified employees to work in Company-owned and franchised restaurants have had a negative impact on our Company-owned restaurant margins and the margins of franchised restaurants. Competition for qualified drivers also continues to increase as more companies enter the delivery space, including third-party aggregators. Additionally, economic action, such as boycotts, protests, work stoppages or campaigns by labor organizations, could adversely affect us (including our ability to recruit and retain talent) or our franchisees and suppliers whose performance may have a material impact on our results. Social media may be used to foster negative perceptions of employment in our industry and to promote strikes or boycotts. We are also subject to federal, state and foreign laws governing such matters as minimum wage requirements, overtime compensation, benefits, working conditions, citizenship requirements, discrimination and family and medical leave. Labor costs and labor-related benefits are primary components in the cost of operating our restaurants and QC Centers. Labor shortages, increased employee turnover and health care mandates could increase our system-wide labor costs. A significant number of hourly personnel are paid at rates close to the federal and state minimum wage requirements.
Accordingly, the enactment of additional state or local minimum wage increases above federal wage rates, or of regulations related to exempt employees, has increased and could continue to increase labor costs for our domestic system-wide operations. Failure to retain the services of our Founder, John Schnatter, as Chairman and brand spokesman, or to successfully execute succession planning and attract talented team members, could harm our Company and brand. John H. Schnatter is our Founder and Chairman. We do not maintain key man life insurance on Mr. Schnatter, although we depend on the continued availability of his image and his services as spokesman in our advertising and promotional materials. While we have entered into a license agreement with Mr. Schnatter related to the use of certain intellectual property related to his name, likeness and image, our business and brand may be harmed if Mr. Schnatter’s services were not available to the Company or if the reputation of Mr. Schnatter were negatively impacted, including by social media or otherwise. The Company recently appointed Steve Ritchie to serve as Chief Executive Officer, succeeding Mr. Schnatter in that role. If we are not able to effectively execute this Chief Executive Officer succession and future succession planning, or manage any related organizational change, it could harm our Company and brand. Failure to effectively identify, develop and retain other key personnel, recruit high-quality candidates and ensure smooth management and personnel transitions could also disrupt our business and adversely affect our results. The concentration of stock ownership by our Founder and Chairman allows him to substantially influence the outcome of certain matters requiring stockholder approval. As of December 31, 2017, Mr. Schnatter beneficially owned approximately 29% of our outstanding common stock.
As a result, he may be able to substantially influence the strategic direction of the Company and the outcome of matters requiring approval by our stockholders. We rely heavily on information systems, including digital ordering solutions, through which over half of our domestic sales originate. We also rely heavily on point-of-sale processing in our Company-owned and franchised restaurants for data collection and on payment systems for the collection of cash, credit and debit card transactions, and other processes and procedures. Our ability to efficiently and effectively manage our business depends on the reliability and capacity of these technology systems. In addition, we anticipate that consumers will continue to have more options to place orders digitally, both domestically and internationally. Our failure to adequately invest in new technology and adapt to technological developments and industry trends, particularly in our digital ordering capabilities, could result in a loss of customers and related market share. Even with adequate investment in new technology, our marketing and technology initiatives may not be successful in improving our comparable sales results. Additionally, we operate in an environment where the technology life cycle is short and consumer technology demands are high, which requires continued reinvestment in technology, increasing the cost of doing business and the risk that our technology may not be customer-centric or could become obsolete, inefficient or otherwise incompatible with other systems. We rely on our international franchisees to maintain their own point-of-sale and online ordering systems, which are often purchased from third-party vendors, potentially exposing international franchisees to more operational risk, including cyber and data privacy risks and governmental regulation compliance risks.
Our critical business and information technology systems could be damaged or interrupted by power loss, various technological failures, user errors, cyber-attacks, sabotage or acts of God. In particular, the Company and our franchisees may experience occasional interruptions of our digital ordering solutions, which make online ordering unavailable or slow to respond, negatively impacting sales and the experience of our customers. If our digital ordering solutions do not perform with adequate speed and security, our customers may be less inclined to return to our digital ordering solutions. Part of our technology infrastructure, such as our domestic FOCUS point-of-sale system, is specifically designed for us and our operational systems, which could cause unexpected costs, delays or inefficiencies when infrastructure upgrades are needed or prolonged and widespread technological difficulties occur. Significant portions of our technology infrastructure, particularly in our digital ordering solutions, are provided by third parties, and the performance of these systems is largely beyond our control. Failure of our third-party systems and backup systems to adequately perform, particularly as our online sales grow, could harm our business and the satisfaction of our customers. Such third-party systems could be disrupted either through system failure or if third party vendor patents and contractual agreements do not afford us protection against similar technology. In addition, we may not have or be able to obtain adequate protection or insurance to mitigate the risks of these events or compensate for losses related to these events, which could damage our business and reputation and be expensive and difficult to remedy or repair. We are subject to a number of privacy and data protection laws and regulations. 
Our business requires the collection and retention of large volumes of internal and customer data, including credit card data and other personally identifiable information of our employees and customers housed in the various information systems we use. Constantly changing information security threats, particularly persistent cyber security threats, pose risks to the security of our systems and networks, and the confidentiality, availability and integrity of our data and the availability and integrity of our critical business functions. As techniques used in cyber-attacks evolve, we may not be able to timely detect threats or anticipate and implement adequate security measures. The integrity and protection of the customer, employee, franchisee and Company data are critical to us. Our information technology systems and databases, and those provided by our third-party vendors, including international vendors, have been and will continue to be subject to computer viruses, malware attacks, unauthorized user attempts, phishing and denial of service and other malicious cyber-attacks. The failure to prevent fraud or security breaches or to adequately invest in data security could harm our business and revenues due to the reputational damage to our brand. Such a breach could also result in litigation, regulatory actions, penalties, and other significant costs to us and have a material adverse effect on our financial results. These costs could be significant and well in excess of our cyber insurance coverage. We are subject to the risk of investigations and litigation from various parties, including vendors, customers, franchisees, state and federal agencies, stockholders and employees. From time to time, we are involved in a number of lawsuits, claims, investigations, and proceedings consisting of intellectual property, employment, consumer, personal injury, commercial and other matters arising in the ordinary course of business. 
Assessing the outcome of these matters involves a significant amount of judgment, and actual outcomes or losses may materially differ. Regardless of whether any claims against us are valid, or whether we are ultimately held liable, such litigation may be expensive to defend and may divert resources away from our operations and negatively impact earnings. Further, we may not be able to obtain adequate insurance to protect us from these types of litigation matters or extraordinary business losses. We may be subject to harassment or discrimination claims and legal proceedings. Although our Code of Ethics and Business Conduct policies prohibit harassment and discrimination in the workplace, whether sexual or in any other form, and although we have ongoing programs for workplace training and compliance and we investigate and take disciplinary action with respect to alleged violations, actions by our team members could nevertheless violate those policies. Franchisees and suppliers are also required to comply with all applicable laws and govern themselves with integrity. Any violations (or perceptions thereof) by our franchisees or suppliers could have a negative impact on consumer perceptions of us and our business and create reputational or other harm to the company. The outcome of the June 2016 referendum in the United Kingdom was a vote for the United Kingdom to cease to be a member of the European Union (known as “Brexit”). This has resulted in a historically lower valuation of the British Pound in comparison to the U.S. Dollar and resulted in significant currency exchange rate fluctuations. While the future impact and other implications of Brexit on our operations in the European Union remain unclear, it has the potential to increase currency volatility, disrupt trade with changes in tariffs and regulations, impede the free movement of goods needed in our operations, and otherwise create global economic uncertainty and negatively impact consumer sentiment. 
As of December 31, 2017, 29.6% of our total international restaurants are located in countries within the European Union. We operate globally and changes in tax laws could adversely affect our results. Additional rules, regulations and guidance are expected from the U.S. Treasury Department in connection with the Tax Act, which may alter interpretations of its provisions and change our preliminary analysis and conclusions. Any Treasury rules, regulations and guidance may materially impact the Company's operating results, including our effective tax rate, related provision for income taxes or amount of deferred tax assets and liabilities, and related valuation allowances. We cannot currently predict the overall impact of the Tax Act on our business and results of operations. There could be unforeseen adverse tax consequences that arise as a result of the Tax Act. In addition, further changes in the tax laws of foreign jurisdictions could arise. These contemplated changes could increase tax uncertainty and may adversely affect our provision for income taxes. As of December 31, 2017, there were 5,199 Papa John’s restaurants system-wide. The following tables provide the locations of our restaurants. We define “North America” as the United States and Canada and “domestic” as the contiguous United States. Note: Company-owned Papa John’s restaurants include restaurants owned by majority-owned subsidiaries. There were 246 such restaurants at December 31, 2017 (31 in Colorado, 60 in Maryland, 32 in Minnesota, 94 in Texas, 26 in Virginia, and 3 in Georgia). Most Papa John’s Company-owned restaurants are located in leased space. The initial term of most domestic restaurant leases is generally five years with most leases providing for one or more options to renew for at least one additional term. Generally, the leases are triple net leases, which require us to pay all or a portion of the cost of insurance, taxes and utilities. In connection with the 2016 sale of our Phoenix market, we also remain contingently liable for payment under 42 lease arrangements. 
Nine of our 12 North America QC Centers are located in leased space. Our remaining three locations are in buildings we own. Additionally, our corporate headquarters and our printing operations located in Louisville, KY are in buildings owned by us. Our international leases include our Company-owned restaurant sites in Beijing and North China. At December 31, 2017, we also leased and subleased to franchisees in the United Kingdom 316 of the 384 franchised Papa John’s restaurant sites. The initial lease terms on the franchised sites in the United Kingdom are generally 10 to 15 years. The initial lease terms of the franchisee subleases are generally five to ten years. We own a full-service QC Center in the United Kingdom and lease our QC Centers and office space in Beijing, China, and Mexico City, Mexico. Ages are as of January 1, 2018. Steve M. Ritchie was appointed President and Chief Executive Officer effective January 1, 2018. He served as President and Chief Operating Officer from July 2015 to December 31, 2017, after serving as Senior Vice President and Chief Operating Officer since May 2014. Mr. Ritchie has served as a Senior Vice President since May 2013 and in various capacities of increasing responsibility over Global Operations & Global Operations Support and Training since July 2010. Since 2006, he also has served as a franchise owner and operator of multiple units in the Company’s Midwest Division. Lance F. Tucker was appointed Chief Administrative Officer in July 2012 and Chief Financial Officer in February 2011. Mr. Tucker previously held the positions of Treasurer from February 2011 to October 2017, Chief of Staff and Senior Vice President, Strategic Planning from June 2010 to February 2011, after serving as Chief of Staff and Vice President, Strategic Planning since June 2009. Mr. Tucker was previously employed by the Company from 1994 to 1999 working in its finance department. From 2003 to 2009, Mr. 
Tucker served as Chief Financial Officer of Evergreen Real Estate, a company owned by John Schnatter. Mr. Tucker is a licensed Certified Public Accountant. It was announced on January 16, 2018 that Mr. Tucker is departing the Company effective March 2, 2018. Michael R. Nettles was appointed Senior Vice President, Chief Information and Digital Officer in February 2017. Mr. Nettles joined Papa John’s after four years with Panera Bread serving as Vice President, Architecture and Information Technology Strategy. Prior to Panera, Mr. Nettles served as Vice President of Tag Solutions for Goji Food Solutions from April 2011 until July of 2012 and concurrently as Founder and President of Red Chair Ventures, a foodservice technology solutions provider, from January 2009 until July of 2012. February 2001, Director of Franchise Development from December 1996 to March 1997 and Construction Manager from November 1995 to December 1996. He has been a franchisee since 1993. Brandon P. Rhoten was appointed Senior Vice President and Chief Marketing Officer in August 2017. Mr. Rhoten joined Papa John’s after six years with The Wendy’s Company, serving as Vice President, Marketing, Head of Advertising, Social Media, Media and Digital Marketing from 2015 through 2017; from 2013 through 2015, serving as Vice President, Head of Digital, Digital Marketing and Social Media; and from 2011 through 2013 serving as Director, Head of Digital Marketing and Social Media. Caroline Miller Oyler was appointed Senior Vice President, General Counsel in May 2014, having served as Senior Vice President, Legal Affairs since November 2012 and previously as Vice President and Senior Counsel since joining the Company’s legal department in 1999. She also served as interim head of Human Resources from December 2008 to September 2009. Prior to joining Papa John’s, Ms. Oyler practiced law with the firm Wyatt, Tarrant and Combs LLP. On February 9, 2018, Steven R. 
Coke, 39, the Company’s Vice President of Investor Relations and Strategy, was appointed to the positions of principal financial and accounting officer of the Company on an interim basis, effective March 2, 2018, the previously announced date of departure of Lance Tucker, the Company’s Chief Financial Officer and Chief Administrative Officer. Mr. Coke has served as Vice President, Strategic Planning since January 2015, after serving as Senior Director, Strategy since April 2012 and Senior Director, Restaurant Finance since June 2011. He has served in various director and manager level positions with increasing responsibility in Finance since joining the company in May 1998. Mr. Coke is a licensed Certified Public Accountant. Our Board of Directors declared a quarterly dividend of $0.225 per share on January 31, 2018, that was payable on February 23, 2018, to shareholders of record at the close of business on February 12, 2018. Our Board of Directors has authorized the repurchase of up to $2.075 billion of common stock under a share repurchase program that began December 9, 1999, and expires February 27, 2019. In fiscal 2017, a total of 3.0 million shares with an aggregate cost of $209.6 million and an average price of $70.80 per share were repurchased under this program. Subsequent to year-end, we acquired an additional 546,000 shares at an aggregate cost of $32.7 million. Approximately $395.0 million remained available under the Company’s share repurchase program as of February 20, 2018. The following performance graph compares the cumulative shareholder return of the Company’s common stock for the five-year period between December 30, 2012 and December 31, 2017 to (i) the NASDAQ Stock Market (U.S.) Index and (ii) a group of the Company’s peers consisting of U.S. companies listed on NASDAQ with standard industry classification (SIC) codes 5800-5899 (eating and drinking places). 
Management believes the companies included in this peer group appropriately reflect the scope of the Company’s operations and match the competitive market in which the Company operates. The graph assumes the value of the investments in the Company’s common stock and in each index was $100 on December 30, 2012, and that all dividends were reinvested. The selected financial data presented for each of the fiscal years in the five-year period ended December 31, 2017, were derived from our audited consolidated financial statements. The selected financial data below should be read in conjunction with “Management’s Discussion and Analysis of Financial Condition and Results of Operations” and the “Consolidated Financial Statements” and Notes thereto included in Item 7 and Item 8, respectively, of this Form 10-K. We operate on a 52-53 week fiscal year ending on the last Sunday of December of each year. The 2017 fiscal year consisted of 53 weeks and all other years above consisted of 52 weeks. The additional week resulted in additional revenues of approximately $30.9 million and additional income before income taxes of approximately $5.9 million, or $0.11 per diluted share for 2017. North America franchise royalties were derived from franchised restaurant sales of $2.30 billion in 2017 ($2.25 billion on a 52 week basis), $2.20 billion in 2016, $2.13 billion in 2015, $2.04 billion in 2014 and $1.91 billion in 2013. Includes international royalties and fees, restaurant sales for international Company-owned restaurants, and international commissary revenues. International royalties were derived from franchised restaurant sales of $761.3 million in 2017 ($744.0 million on a 52 week basis), $648.9 million in 2016, $592.7 million in 2015, $553.0 million in 2014 and $460.0 million in 2013. 
Restaurant sales for international Company-owned restaurants were $13.7 million in 2017 ($13.4 million on a 52 week basis), $14.5 million in 2016, $19.3 million in 2015, $23.7 million in 2014 and $22.7 million in 2013.
Part 3 of George, his camel, the Victoria Cross and 11 more bravery awards. A week ago today I started with the first of four blogs on a fellow I am calling George who came into this world in a log cabin in a place that years later would become Dauphin Manitoba. The boy grew up on a farm and learned to shoot as a kid and got so good at it that he would not only be winning all the local shooting contests, but he would also be bringing home the food for the workers employed at his father's sawmill. In his teens George would become involved with the boy scouts, then the Manitoba Horse, a militia unit of the day, and then he'd join the regular army and serve in the trenches of France with the First Manitoba Rifles. The 2nd blog told of his crawling out of the trenches after about 8 months and switching into the air force by enlisting with the Royal Flying Corps. By the Fall of 1917, with about three years' service under his belt, George would have been promoted to the commissioned rank of a lieutenant and moved up from a machine gunner and observer to that of pilot. By the end of the 2nd blog he had already shot down over a half dozen enemy planes and become an air ace in the process. For these he would be awarded the first four of 12 medals he would eventually get for bravery. These were not one, but 2 Mentions in Dispatches, and also 2 Military Crosses for bravery while flying combat missions over France. With the capture of some 80,000 Italians near their front and the country's near collapse, the British ordered four squadrons to leave France and do their best to aid the Italians. George would travel with one of the squadrons and would actually command most of the flyers sent. It did not take long for the Brits to realize that the Germans and Austrians commanded the air space, having already had so much success over the Italians. Thus the job for the Brits was to set the tone and let the enemy know that they meant business. 
Very soon after arrival George and three other Camels were ambushed by 12 of the enemy, but the Camels took to battle right away and sent a message to the enemy... they had no intention of turning and running. In fact in one of the earliest dog-fights 12 German planes attacked the 4 Camels... and all were driven off with the exception of the one George forced into the ground, having already shot off one of its wings. On another flight George and his squadron came across two large observation balloons tied off quite close to the ground in a large field. A long line of trucks carrying enemy supplies was also in the area. George immediately dropped out of the sky and flew very low and took out both balloons and set the trucks scrambling to safety. Later when he got back to base he was given a lecture by his boss who noted the instructions of the day did not permit such low flying. George simply responded that upon seeing the obvious targets he just forgot the order. Not long after this they came across five balloons and set them all afire and also shot up a German staff car which flipped over under the Camel's intense cannon fire. The next morning about 40 enemy planes woke the Brits to an air attack but it was a sloppy job as most apparently were still under the influence of too much partying the night before. The Brits raced to their planes and after many dogfights drove about a dozen enemy planes to the ground. But en-route back to their base the Brits came under attack by about a dozen of the most formidable German craft... the Gotha bombers. But the Brits managed to drive two out of the air. Again the men were not awarded the credits they deserved because the enemy attack was due to their disobeying commands earlier and conducting unauthorized raids. On Jan 1 1918 George was on another mission escorting some Brit bombers, but doing so from off in the distance. The enemy did not see him when they swooped in for some kills. 
George then jumped the enemy and blew one from the skies. He would be awarded his first Distinguished Service Order for this action. The DSO is just one medal down from the Victoria Cross. Days later he would be shooting down 2 more enemy craft. Because of his flying skills, the British command would often turn to George to do the most difficult jobs. One of these being the dropping of spies into certain target areas. He was so successful with numerous missions of this sort that the King of Italy awarded him first a Silver Medal for Valor, and later even a 2nd one. This medal is the highest medal Italy can award a non Italian for war time bravery. In April and May of 1918 George would shoot down another 9 enemy planes. A second bar to his Military Cross would soon be presented. And soon the French government presented its Croix de Guerre for his support role in defending their bomber missions as deep as five miles into those enemy lines. In May one of his victims was an Austrian air ace. By June of 1918 the Austrians had lost about 150 of their 200 aircraft. So they were in little mood for George's invite to come and do battle. He had invited the enemy in the past with similar challenges but this time told the enemy his boys would be doing bombing runs for the next 2 weeks and gave the times and places and invited the enemy to come up and say hello. They never showed up. And so the Brits did just what they said they would do. They dropped their bombs with impunity. In July a new squadron was formed and George, now with the rank of Major, was given its command. The following day 3 of his planes ran into 5 of the enemy and drove two from the sky. Days later they would drive five to the ground. Another 24 hrs. would pass and George's team ran into the enemy. This time they decided they wanted nothing to do with the Brits and sped off. Within a few more days George would down 3 more planes. 
It was at about this time that he would get yet another Mention in Dispatches. Now with an acknowledged record of 33 planes and 9 balloons, George was awarded a bar to his Distinguished Service Order. The equivalent of earning two of the very medal just one down from the VC. The numbers were really much higher, but credits were not given for those he shot down before becoming an officer and those shot down on unauthorized raids. In August of 1918 George flew the Prince of Wales on a flight over the enemy territory so that the Prince could see the battleground for himself. George's next assignment involved relocating back to England and commanding the fighter pilot training, but I'll bring George's blogs to a close with that story next Wednesday. The buzz in US newsrooms across the country yesterday was the White House East Room ceremony to award 24 Medals of Honor, the most in one gathering since the days of WWII. A previous blog alerted you to the event to take place on the 18th, and yesterday families and dignitaries and a few current Medal of Honor heroes attended the historic occasion. US President Barack Obama's comments quickly turned to the wrongs of the past and noted that... "No nation is perfect, but here in America, we confront our imperfections." He added that... "Some of these fellows fought and died for a country that did not always see them as equals." About a dozen years ago the US Congress ordered a review of over 6,500 cases where a soldier was awarded a Distinguished Service Cross. Awards covered WWII, Korean and Vietnam service and concentrated on the Army, and those veterans that were either Hispanic or Jewish. The review was later extended to Black Americans. And the review did not come easily. A fellow named Mitchell Libman can take the credit for starting this. 
Some 50 years ago he wondered why his buddy only got a Distinguished Service Cross, one down from the MOH, when the feeling amongst many was that he ought to have been awarded the Medal of Honor, the highest of all military awards. He began a 50 year fight to see justice done. The President himself congratulated Libman on what he actually accomplished with this decades long battle to set things right. Twenty four men were found in the review to have been awarded the lower medal when indeed they were deserving of the MOH. Eight from Vietnam, nine from Korea and seven from WWII. One of these is still listed as missing, ten others never came home. Yesterday only three of the 24 were still alive and were presented their medals in person by Obama in Washington. All but the three received their awards posthumously, and yesterday's medals were presented to relatives at the White House. Sgt First Class Melvin Morris on the left, at center is Master Sergeant Jose Rodela, and Spc Santiago Erevia are proudly wearing their Medals of Honor. All earned them for actions in 1969 in Vietnam. Their very stern looks might be saying lots about the 50 years of discrimination they, the remaining 21 and possibly thousands of others experienced during those horrible war years, and perhaps since. Moving along, but still with corrections... back on July 7 last year I brought you a story of Lt. Colonel James Forbes-Robertson, a Scottish-born officer who was placed in command for a short period with the Royal Newfoundland Regiment back during WWI. The story about the regiment happened in 1917 but I had stated that it was during the Battle of the Somme in 1916. While the unit definitely fought most heroically there, and was almost wiped out, the story in the blog was in fact about another battle the next year. I ended the blog by also mixing up the Colonel's dates of birth and death. His correct date of death was 5 August 1955 and his birthday was 7 July 1884. 
I thank that reader for holding my feet to the fire and am sorry for these errors. Again moving along, watch out for the news coverage next week on March 25th. That day is National Medal of Honor Day in the US, and ought also to be recognized in Canada, but doubtfully you will see any news coverage in Canada about it. Back in 1863, that was the day when six of the survivors from the Andrews Raiders were in Washington DC, met with the Secretary of War and were presented with the first ever Medals of Honor. They were later taken down the road to the offices of the White House and met President Abraham Lincoln. These medals of course were the first presented but not the first earned. Later that year and for years to come others would be awarded the medal for actions that actually dated before the events involved in the Andrews Raid. And finally a note of thanks to Mark Sumner who has taken a great interest in my blogs and come to my assistance with information about Newfoundland born John Hayes. Much has been written in past blogs about Hayes and the famous battle of 1864 between the USS Kearsarge and the CSS Alabama off the coast of Cherbourg France. Hayes and another fellow thought for years (in error) to be a Canadian, Joachim Pease, were among several from that battle to be awarded the Medal of Honor. Mark has sent me many tidbits on Hayes and has recently actually visited the grave in Iowa. He has also taken a Medal of Honor flag to the cemetery and posted it at the grave site and noted that in a few months there will be a commemoration ceremony at this grave. Stay tuned for more on that, but in the meantime here are three of several photos he has sent along regarding this Canadian hero of Civil War days. Back on Friday with the last on Victoria Cross recipient George. Part two of George's bravery, his riding the camel and earning a Victoria Cross. George's earning the Victoria Cross had to be the highlight of his military career. 
But it ought never be forgotten that on the road to receiving the VC he also earned another 11 medals for bravery in the face of the enemy. The last column left off with him being awarded the first of the dozen for actions in France where he forced one plane down, set another crashing to earth in flames and in the process did what he was originally sent off to do... gain valuable intelligence on enemy movements in the area... and doing so by flying very low and constantly in incredible danger throughout. This was in July 1916. And he had just returned a few months earlier from England where he took further training and was commissioned a 2nd Lieutenant. Two months later, during the Battle of Cambrai, George was airborne again taking photos of the German defences and also the newly developed German tanks. He as usual had to fly very low to get good intelligence on not only movement and placement of weaponry but also to find evidence of new movement and defenses. While doing so he and his pilot came under attack by two German aircraft but both were driven off. Then they found themselves in the midst of 4 enemy planes but these too were driven off. About mid November 1916 the British forces finally took the village of Beaumont Hamel and the RFC were given instructions to keep an eye on the area for any enemy movements that could result in the village falling to the Germans. It was George and his pilot that came across a massive assembly of about 4,000 enemy not far away, obviously gathered to attempt to retake the village. He called in the co-ordinates and directed artillery fire that, when completed, saw the massive destruction of the enemy and saved the village from falling again. Two months later George was awarded the Military Cross for these actions. Days after his bravery during the Beaumont Hamel incident George's desire to become a commissioned pilot instead of an observer/gunner and co-pilot was supported and he was sent off to England for training. 
This formally began in January of 1917. At that time it must be recognized that the training was very limited. You were shown how to get the plane up and down, a little about map work and how to load the Lewis machineguns while airborne and little else. The incredible demand for pilots did not give a lot of time for training. And none was given on how to pull out of spiral dives, dogfighting or anything other than the bare-bones basics. It was no wonder that in those days the life expectancy of a pilot was only 11 days. But George still took to the basics so fast that he was taking his first solo flight after only 55 minutes of formal training. Soon graduating and commissioned as a Captain, he would be back in the air over France. And soon he would be battling in the Arras Offensive where he not only shot down yet another plane but discovered an enemy trench loaded with about 1,000 Germans. He directed the very successful shelling of the trench and two very powerful long-range guns. On 18 July he was awarded another MID and another Military Cross for these actions. It would result in the first of two bars he would eventually have on his original MC. Said another way, he would have three Military Cross awards. He was also promoted to Flight Commander. The oak leaves on this pin represent the award called the MID or Mention in Dispatches. The Military Cross is just two medals in ranking below the Victoria Cross. After numerous other flights and several minor wounds George was sent back to England for a short rest. There his talents were put to good use in the training of the never-ending lineup of new recruit pilots. And it would be here that he would be assigned to a Sopwith Camel. A two winged one, not a four legged one. Here is a picture of George and his Camel. It is a one seat biplane named after its British founder, Thomas Sopwith. His Sopwith Aviation Company unveiled the plane in 1917 and armed it with two synchronized Vickers machineguns. 
The plane was light but well armoured and was very easy to handle by the experienced pilot and could bank to the right in less than half the time of enemy planes. By war's end it would prove to be the most successful model of plane and to its credit brought down almost 1,300 enemy planes. It would be George's favorite plane and under his command would bring down almost 50 enemy aircraft. George continued training the recruits but had heard of developments in the airplanes of the enemy back in France and continually pushed his bosses to reassign him to the Western Front. Finally, after he chose to buzz the training headquarters at a very low height, they decided his talents were better used buzzing the enemy instead of the HQ building and sent him packing again back to France, and to a squadron of flying Camels to boot. In late October 1917 George was leading a squad of Camels in France when they came under attack by the powerful new German Albatros warplanes. The Brits were attacking a long line of soldiers in a blinding rainstorm. Out of nowhere came the German planes. Two Camels were forced to the ground and George's plane was raked from one end to the other and he was fighting for his life. He went into an immediate tight turn to the right, barely cleared the tops of some trees, and suddenly pulled into a loop that saw him finally pull out when just a few feet off the ground. His burst of shells into a pursuing enemy saw that plane burst into flames. Then another came at him, and again he went into a sudden loop that caught the enemy by surprise. Pulling out of the loop he then shot that plane out of the sky. Two days later he took out yet a third. He was now a war ace with five or more kills to his credit. George was well on his way to becoming a top ace when plans called for his moving out of the Western Front and into another theatre of war. But on Wednesday I return to cover a few other matters. 
Like so many other boys, George got some of his initial upbringing not only from family and relatives but also from the Boy Scouts, where the youth were taught very early of their duties to God, others and self. There he would also learn that the Scout mission was to develop well-grounded youth, enabling them to prepare very early in life for personal success and worldly contributions. And George learned these principles well. Seven Victoria Cross recipients who started out as Scouts came from Canada. Google the names of RE Cruikshank, GB McKean, JW Foote, C Hoey, WG Barker, C Merritt and JC Richardson to learn more about these heroes. Some of these men had their stories told in earlier blogs. George was born in a log cabin, the first of nine siblings, in an area so small it did not even become a village till eight years after he was born. Its population at the time was about 1,000; it is now about 8,000 and known as Dauphin, near the SW corner of the province of Manitoba. The family moved about 150 km SW to Russell, Manitoba in about 1902 and operated a farm and a sawmill till about 1913, when they returned to Dauphin. As a teen George would often be pulled from school for weeks on end to help cut the logs. If not at school or working on the farm or at the mill, George was out riding his horses and shooting his lever-action rifle. He got so good at shooting, especially while on horseback, that just about anytime he entered the local turkey shoots he'd win the prize, often beating out the adults competing. He'd as often be spending his allowance on ammunition, and at one point he even designed his own peep sight for the rifle. His sharpshooting skills were also put to good use as the hunter for much of the food the workers at the family sawmill ate each day. Years later a biographer would state that his shooting skills were so good that he could have been a trick shooter at a circus. It was here that George also participated in the Scout movement.
In 1914 George signed up with the local militia, joining the cavalry unit called the 32nd Manitoba Horse. By late December he decided to leave his grade 11 class at high school and signed up with the regular army, joining the First Mounted Rifles as a Trooper. Both cap badges are shown above. In June of 1915 George went with his unit to England and soon qualified as a machine gunner on the Lewis MGs. By September he was in France, and he spent the next 8 months fighting in the trenches with his machine gun. George soon came to realize that in trench warfare there was not a lot of use for the customary cavalryman's horse. And the trench was wet, full of mud and the odd rat. In fact many odd rats! Apparently some were the size of cats! The going was slow and very dangerous, having to deal with very heavy enemy fire, barbed wire, artillery shells, the gas attacks and the fear of either freezing or drowning in those very pits. Looking up into the sky, George tended to wander off in thoughts of being in one of those planes involved in the dogfights. He could even recall seeing the stunt flyers back in Manitoba at many of the county fairs he attended during his turkey-shooting days. And the prevailing thought was how to get out of the trenches and into those dogfights. His chance came when he heard that Britain's Royal Flying Corps was always looking for good men. Men who were good shots and men who had abilities on a horse... and thus an excellent sense of balance... be they upright or not. He obviously fit all the bills, applied, and was just as quickly rejected. But he remembered his Boy Scout training about pushing for success and tried again, and this time got accepted... and was given the rank of corporal to boot... but not as a flyer, as a mechanic... It was a start! George would also be trained as an observer of the land, the movement of troops, maps and compass etc., and in methods of communicating this information to friendly forces below.
Soon he would receive the observer's qualifying breast flash, shown to the right of his new RFC shoulder flash. This training would occur in England, and soon after he would be back in France again and posted to an RFC operational squadron there. George was assigned as an observer and machine gunner on a Royal Aircraft Factory-built light biplane bomber known as the BE 2c; a model is shown on the left. By mid-July of 1916 he would be in the 2nd cockpit from the front, and his sharpshooting skills from back home were put to the test. A test he passed with the driving down of an enemy LFG Roland (model shown on right and above). The following month George's accuracy sent another Roland earthbound in flames. For these actions he was awarded an MID, a Mention in Dispatches, which is in itself a bravery award. He would get two more of these before the war was out. And George would get many more bravery awards before war's end. In fact so many that he would become Canada's most highly decorated soldier, sailor or airman in our history. So too for the entire British Empire and also the Commonwealth of Nations. But more on this on Monday. Much more! Join me and hear all about the Camel. There are a dozen war heroes that Canadians need to reflect on just in the first half of March alone. Five earned the Medal of Honor and seven earned the Victoria Cross... all with March connections. Over the past year or slightly more, I have brought you well over 200 blogs about the Medal of Honor, the Victoria Cross and matters connected to both. The five Medal of Honor men mentioned in the title above were Wilcox, Byrd, Miller, Phillips and Stoddard. I hope their names ring a bell. History has yet to produce a photo of James Stoddard. But it sure has produced lots of pictures of the USS Stoddard, which was built in 1943 and named in this Canadian hero's honour. This photo was taken near Bremerton, Washington State in 1944. The ship's patch is also shown.
In 1993 the Queen of Canada introduced the new Canadian Victoria Cross, awarded on the same principles as the British Commonwealth's VC, shown on the left. There are slight changes in the new Canadian version on the right. No Canadian versions have yet been awarded. There are six other Canadian Victoria Cross recipients with connections to the month of March. None of their stories have appeared in these blogs yet, but hopefully they will be told in the months to come. The first two of these are also birthday boys, both on 5 March. Robert Spall served with the Canadian Forces (PPCLI) but was born at Ealing, Essex County, England. His bravery near Parvillers, France on 13 August 1918 resulted in his being awarded the VC. He was born in 1890. Fellow birthday boy William Hew Clark-Kennedy was born on the same day but in 1879, and while also serving with the Canadians, he was actually born at Dunskey, Kirkcudbrightshire, Scotland. His VC was earned for actions on 27/28 August 1918 at the Fresnes-Rouvroy Line, also in France. Thomas Fasti Dinesen died on 10 March 1979. He was born at Rungsted, Denmark and served with the famous Black Watch. His bravery was recognized with a VC for actions in France at Parvillers on 12 August 1918, just a day before Robert Spall, mentioned above. On 12 March 1930 William George Barker of Dauphin, Manitoba passed away. His bravery for actions of 27 October 1918 at Foret de Mormal, France earned him the Victoria Cross. On March 13th in 1978 Canada lost yet another hero when Milton Fowler Gregg passed away. Born at Mountain Dale, Kings County, NB, his actions at Canal du Nord, again in France, on 28/30 Sept. 1918 resulted in his being awarded the VC. And on 15 March 1890 Jean Baptiste Arthur Brillant was born at Assemetquagan, Routhierville, Quebec. He was awarded the VC for his actions during the Battle of Amiens on 8/9 August of 1918, and he died the following day. On Friday I will bring you a story of one of these heroes.
It will not be one you can afford to miss. The story left Andy and a crew of 7 others on a bombing run over German-occupied Northern France. Their target was a railway marshalling yard at Cambrai, and to avoid high civilian casualties they had to fly very low. Just a few thousand feet off the ground, far lower than their usual 25,000 ft. bombing runs. But, as mentioned on Wednesday, being so low they got caught in blinding enemy searchlights and had to quickly dive and then pull out of the dive to get out of the light. But no sooner had they escaped that trap than they entered another, when a German night bomber came out of nowhere and started emptying its cannon fire into the belly of the Canadian plane from below. The devastating blasts knocked out two port engines and much of the plane's hydraulics, and set it ablaze in several areas. The pilot immediately realized that the plane would be going down within minutes and nothing could save it. He gave the order for all to bail out immediately. When the pilot thought the entire crew had escaped, he parachuted out himself. But only five had gotten out... by way of the front escape hatch. Andy was still on board. And so was his Thunder Bay, Ontario buddy, Pilot Officer Pat Brophy, who had an even more serious problem to deal with. The plane's rear turret is very small, and there was no room in it for a parachute, which had to be stored in the belly of the plane, but close by. When Brophy turned the turret to deal with the attacking enemy plane, it revolved partially along its track, and in doing so it moved past the escape door leading from the turret back into the belly of the plane. And since the hydraulics were now out, the only way out was to crank the turret back to the position of the door... and to do this by hand. Brophy moved it a little, but then the crank broke... and when that happened he quickly came to realize it was the end for him. There was no escape. Period!
As noted on Wednesday, this is not Andy's plane but another, made in 1945 and later repainted with his crew's markings on it. Andy worked the guns on the top and Pat the guns at the rear end of the plane. When Andy got out of his own turret and down to the escape hatch, he looked back through the flames inside the cabin and could see through a plexiglass window that Pat was still inside his turret, struggling to get the door open. Andy fell to his hands and knees and crawled through the flames of hydraulic oil as he made his way to the back. Grabbing a fire axe on the way, he finally got to the rear, but by then his own clothing AND parachute were on fire. Nevertheless he hacked away at the door but could not get it open. He then dropped the axe and tried with his bare hands, but could not get it to budge. His buddy knew that it was useless and ordered Andy to leave him, so that at least he could escape and live to fight another battle. Andy ignored him and kept trying, but finally had to give up and crawled back... still on fire and through fire... and made it to the escape hatch. Andy then stood up... still on fire... came to attention facing Pat, saluted his buddy and senior Pilot Officer and said something. It was probably the very words he had uttered a hundred times in the past as each went off to bed. It was probably... "Good night, Sir." Andy then plummeted to earth. His chute was so badly burned that it did not open. French farmers nearby witnessed the ball of fire falling earthbound and raced to the scene to find Andy, amazingly still alive... and still on fire. After the flames were put out they rushed him to a local doctor, but his burns overcame him, and Andy died not from the fall, but from the flames. He was buried nearby and later reinterred at the Commonwealth War Graves Commission's Meharicourt Communal Cemetery, about 30 km from Amiens, France.
He now rests with 40 other airmen: 12 from Canada, 21 from the RAF, 6 from the Royal Australian AF and 2 from the Royal New Zealand AF. When the plane crashed it first struck a tree and a wing was torn off. Then the plane broke into many pieces. With it bouncing about, the rear turret was broken away from the plane. Within seconds Pat Brophy miraculously came to. He had been blown from the turret, smashed into a tree, and knocked out for a few seconds. When he awoke, he took off his helmet and his four-leaf clover fell out. The very clover that had been given to him for good luck by his buddy... Andy... who was now dead. Other than a few scratches and being badly knocked about, Pat was otherwise quite fit for duty. Pat looked at his watch: it was 13 minutes past midnight. And for the triskaidekaphobic readers, yes, it was Friday the thirteenth, in June of 1944. And to boot, it was the crew's 13th mission. Pat made it to a local village and, hearing troops coming, hid in a doorway. From behind, two men approached him, muffled him and took him away. They were resistance men, and Pat joined up with them and continued to fight till finally repatriated back to England. He then met others from his crew and for the first time told them what his fellow gunner Andy had tried to do to save his life. It was only then that he learned that Andy had died. From the above you can now see that Andy had a last name. His full name, of course, being Andrew Charles Mynarski. Two months after the announcement of the Victoria Cross being awarded, it was presented to Andy's mother in Alberta. It was presented by the then serving Lt. Governor, the Honorable J.A. McWilliams. It should be mentioned that, days after the D-Day landings, it was learned that the efforts of the air forces involved definitely had a bearing on the landings and the days following. The Panzer divisions arrived without their tanks.
And it was said that they could not get through due to so many traffic jams and tie-ups caused by all the damage the bombers did. Pilot Officer Andy Mynarski, VC is shown here, probably about a year before he was killed in action. At the time of the photograph he was a Warrant Officer. He was promoted to the rank of Pilot Officer the day before being killed in action. Shown also are the wings of an air gunner, and thus the initials... A and G. There are numerous memorials to this officer's heroism in Canada. At Winnipeg, the HQ of the First Canadian Air Division, there is a Mynarski Memorial Room where they proudly display this hero's Victoria Cross. At CFB Cold Lake the very axe he used, recovered from the crash site, is on display. There is a high school in Winnipeg, a park in Alberta, a Royal Canadian Legion branch and an air cadet squadron named in his honor. A three-lake chain in Alberta is named for the hero. CFB Penhold has an officers' quarters so named, and the officer has been inducted into the Canadian Aviation Hall of Fame. The Andrew Mynarski VC plaque above was unveiled at Winnipeg's Kildonan Park in 2005. The middle statue, over 8 feet high, was also unveiled in 2005 at the Durham Tees Valley International Airport, which is located on the very land where Andy once served, then known as Middleton St George, in England. Air passengers pass right by it en route to wherever they are off to. And among others there is the bust outside our own Parliament Buildings in Ottawa, part of the Valiants Memorial, which consists of nine busts and five statues. The above pictured plane will be travelling to England to join up with the only other Lancaster that still flies and will spend a month touring. It is said to be one of Canada's most famous symbols of the war, and readers ought to watch the news in August to catch the story. In the meantime much can be learned about Andy by doing a Google search of his name.
He couldn't have been a triskaidekaphobe, so he gave away his four-leaf clover. His bravery cost him his life. Awarded the Victoria Cross posthumously! When we think of heroes from Winnipeg, Manitoba we usually recall the names of Leo Clarke, Fred Hall and Robert Shankland, who all lived at some point on Pine Street... and even in the same block. Each served in WW1, and the incredible bravery of all three was later rewarded by each earning the Victoria Cross, the British Empire's very highest award for bravery in the face of the enemy. Folks were so proud of these men that in 1925 Pine Street was renamed in their honour and is now known worldwide as Valour Road. Well folks, not all of the heroes in Winnipeg came from the army. Let me tell you about Andy. Andy was one of six siblings born to Polish immigrants. He attended two primary schools in that city, and by the age of 16 he was probably in high school when his father passed away. Like most boys Andy took on odd jobs after school hours to help make ends meet. He'd find work cutting chamois (the famous cloth used in car washing) and in carpentry. He was a great woodworker and loved making furniture. Also like so many young men, Andy joined up with the local militia in 1940 and served briefly with the Royal Winnipeg Rifles. This is their cap badge. In 1941 his thoughts of soaring through the clouds saw him joining the RCAF. Basic training at Edmonton was followed by wireless courses at Calgary and gunnery school at MacDonald, Manitoba, and by Christmas of 1941 Andy was graduating from air gunnery school at Halifax and wearing the rank of a Temporary Sergeant. By December of the following year Andy had shipped overseas and joined up with the 419th Moose Squadron, so called after the squadron's first commander. The men would become known, then and to this day, as Moosemen. He was assigned the job of mid-upper gunner, which meant that his place of work was inside a small turret on top of the plane, about midway along its length.
Temporary Sergeant Andy, shown here, is wearing the badge of an air gunner on his left chest. He would take initial training and take part in various sorties on several types of planes operating out of Middleton St. George, Yorkshire, England. Some of these included the Vickers Wellington, Handley Page Halifax, Avro Lancaster and finally the Avro Lancaster Mk X bombers. These later bombers were built in Ontario by Victory Aircraft Ltd, which later became A.V. Roe Canada Ltd and still later the company Avro Canada, and which by war's end had made 430 planes for the war effort. By June of 1944 Andy would find himself in the thick of battle, carrying out the duties laid out in General Eisenhower's plan to create a massive disruption of the transportation system of Western Europe. This plan called for the US Air Force and the British Air Force (read Canada as well) to bomb highways, rail lines and any routes the Germans could take to bring forces anywhere near the Normandy beaches where the massive landings were planned to take place. On 12 June 1944 all of the gunners were promoted to the rank of Pilot Officer. At about 10 p.m. that night Andy and the rest of the 7-man crew of his plane (6 were Canadians) were given their orders to board the plane for a mission. Just before boarding, Andy looked down at the ground near his plane and found a four-leaf clover, the world-renowned sign of good fortune. He picked it up and, before boarding, gave it to his close friend and crew member, Pat Brophy from Port Arthur, now Thunder Bay, Ont. For some time Pat had held the rank of Pilot Officer and so was senior to Andy, but ranks aside both were very close friends and tended to hang out together, away from the rest of the crew. Both were gunners, and that strengthened their friendship. The Mk X was designed for flights in the 25,000 ft. range... not 2,000 ft. above ground.
But it was at this drastically lower altitude that the men had to fly to ensure the incredible accuracy needed to destroy their very important targets without high collateral damage. A failed mission would result in far higher casualties during the several days of the Normandy landings. But at such a low altitude they were sitting ducks for the flak sent into the air and for the low-flying German night bombers. The target that night was the heavily protected rail marshalling yards at Cambrai in Northern France, shown below and marked with the letter... "A". In the upper left of the map you can see the southern tip of England. On the very night of this attack, Andy's old comrades in the Royal Winnipeg Rifles were carrying out their duties in the area marked with the letter "B" above. Perhaps they saw the formation of planes flying overhead, or Andy observed them en route to his own destiny. We may never know! Very soon after leaving the English Channel behind them, the crew became "coned," an Air Force term meaning they had been caught in the blinding lights of several land-based high-powered searchlights. The 22-year-old pilot quickly put the plane into a dive and then reversed direction upwards and escaped the lights, but no sooner had this been avoided than the crew found themselves in the crosshairs of an enemy night bomber called the Junkers JU 88. These planes had heavy cannons on board and the ability not only to fly low as a routine, but to fire almost straight up, and thus at the soft belly of the Lancaster Mk X, a plane that, unlike many other types, did not have an under-belly gunner and turret. This plane was made at Malton, Ontario in 1945 and never saw combat service. Years later it was repainted in honour of the plane involved in this story. It is an exact replica of Andy's plane. Note the upper mid-body turret position where Andy was stationed, and the turret at the back of the plane where his buddy Pat served.
On Friday I'll return with more on the incredibly amazing story of Andy and his crew.
6.08pm: We are now winding up the live blog for today, but we will be back tomorrow for evidence from former Met commissioner Sir Paul Stephenson, ex-assistant commissioner John Yates, former assistant commissioner Andy Hayman and ex-deputy assistant commissioner Peter Clarke. In the meantime, you can read the latest James Murdoch and Leveson inquiry developments on the MediaGuardian homepage and our Leveson page. You can read Wolff's full comment piece here. 4.49pm: Maberly has now finished giving evidence. The inquiry has finished for the day and will resume tomorrow at 10am. 4.48pm: Jay has one more question - the Mail on Sunday was notified that four people had been targeted by Mulcaire. Does Maberly recall that? "I'm aware that was the case," says Maberly. "Do you know why they received arguably different treatment from others?" Maberly says: "This was probably at a period of time we were trying to contact potential victims. At the time we were concentrating on those ... where the best evidence laid in relation to the investigation." 4.38pm: Maberly is asked by Jay about the significance of the "corner names". He says some of the people he wanted to speak to, their names appeared in the corner of Mulcaire's files. One of the mobile numbers of the three journalists he wanted to speak to appeared in Mulcaire's phone bills, Maberly says. Jay says this is important circumstantial evidence. 4.37pm: "There would have been aspects of the case I would have liked to ask them about. I had no firm evidence of their knowledge of voicemail interception or them tasking Mulcaire." "It would have been the case if we did bring them in for questioning the likelihood is they would have made no comment as did the other two employees of the News of the World. We would have got nowhere." Leveson says it's "all a question of inference". Maberly replies: "We had inference, no evidence." 
Leveson says he is not sure about that, saying circumstantial evidence is often very valuable evidence. 4.35pm: Maberly says he identified three names of people he would have spoken to, had he had sufficient evidence. "I accepted the decision the resources were not there to widen the inquiry." He was deployed elsewhere in the anti-terrorist branch. "These were three journalists on the News of the World?" Yes, says Maberly. He says one of them had potentially moved on and was part of another company. 4.32pm: Another list of stats suggests Clive Goodman rang a number once, says Jay. Maberly points out it's one digit away from the number of a member of the royal household – it was a misdial. Jay thanks him. He says he read it late last night and didn't spot it. 4.31pm: Jay asks about the significance of a list of numbers and names. Maberly says they were the consequence of a billing data address, a list of people called by a 2228 number; the number you have there is the number of times they were called. The top line, their voicemail was called 43 times. The 2228 number is Mulcaire's office phone. The top number belongs to a journalist, says Leveson. Leveson reads out another line - the 2228 number accessed the voicemail of Sky Andrew 23 times. 4.28pm: Maberly is asked if the police tried to find out whose phone that was, which desk it was on. "There was the expectation that News International would be keeping that data for its own records." So you were advised these records would exist, asks Jay. "That's correct. In later applications one of my requests was to ask for a list of the desk phones and diagrams as to where people were sitting." Jay says only "one document" was supplied in this regard. "Were you suspicious you were being fobbed off?" Maberly agrees. 4.25pm: Jay asks about other phone numbers which were hacked, which were outside of the royal family. Maberly says they looked at Mulcaire's various office numbers and also the News of the World hub number.
The paper also had another number, ending in 312, which had the appearance of a mobile number but was another hub number. It was a low-cost number that saved the NoW money. Jay asks if it could be anybody at the News of the World. "Exactly that," replies Maberly. 4.19pm: It was "almost impossible" to know if someone was calling to access your voicemail messages, because that would just show as a call, says Maberly. "That's why we needed to concentrate on this Vampire data." 4.15pm: Maberly mentions "double whacking" – part of Fleet Street folklore – where one person rings your phone and engages it, someone else rings your phone and is directed into your voicemail. You interrupt the voicemail and put in a pin number. He says Mulcaire was "much more sophisticated – changing people's pin numbers, resetting them by calling into service providers, he had knowledge of the language they would use, it was clear he had a knowledge of the different companies' systems in order to be able to do so." 4.12pm: Maberly talks about "Vampire" data, which refers to Vodafone's diagnostic tool that could check when or how voicemails were accessed and so on. 4.09pm: The inquiry has resumed and Detective Inspector Mark Maberly is giving evidence. 4.05pm: Surtees has now finished giving evidence and the inquiry is taking a short break. 3.53pm: Surtees had suggested outsourcing the remainder of the phone-hacking investigation to another part of the Met on 31 May 2006, and in September or October he did this again. However, this suggestion was never taken up. 3.50pm: Surtees says the decision was made in September or October 2006 by DAC Peter Clarke not to expand the scope of the phone-hacking inquiry. "I can't recall being at a meeting. The decision was subsequently communicated to me," he says. Jay asks how he felt about that. "Had I been concerned about the legitimacy or otherwise of that decision I would have taken it elsewhere.
I am clearly alive to the fact we have lines of investigation that have not been pursued in this case. The lines of investigation could have been pursued and as a detective I would have liked to pursue them." 3.49pm: Surtees says there had to be evidence of unlawful activity, rather than just being included in Mulcaire's papers, before they contacted hacking victims. He adds that some of the MPs, military and police victims they did contact, including Tessa Jowell, expressed "shock, incredulity and surprise" but declined to assist with the prosecution. 3.41pm: Surtees said he would have liked to investigate further but was on a number of other investigations, including some of the 72 anti-terrorist investigations. "In terms of what I would liked to have done, coupled with the other investigations I was involved in, I knew where my priorities lay – serious threat to life investigations." 3.36pm: The inquiry has now resumed. Robert Jay QC asks Surtees about the "corner names" in Mulcaire's notes. Jay: "Did you not think it likely these corner names might be commissioning Mulcaire, and would be aware of his tradecraft?" Surtees: "Potentially, yes. I know he was supplying journalists with his product. The issue was whether the journalists knew how he was obtaining that product. Or whether they were simply blindly receiving product." 3.29pm: The inquiry is now taking a short break. 3.29pm: Was Mulcaire's whole week spent in illegal activity? "I don't know," says Surtees. He thinks it was a substantial amount of time, with research activity also going on. "That may well have been legitimate, open source research, and other, perhaps, nefarious research. Whether that would have breached the criminal law, I don't know." Given the scale of the payments by NI to Mulcaire, Surtees confirms to Jay he was "disappointed" that only £12,300 was forfeited by Mulcaire at court.
3.27pm: It was clear that Mulcaire had been working for NI for a number of years, resulting in "substantial cash payments", says Surtees. Jay asks if these were limited to £12,300. Surtees replies: "No. From memory he was on a wage of £100,000-plus a year, and I saw a number of other invoices where he was individually paid for stories. I saw one for £7,000 and one or two others also." 3.25pm: Jay asks: Did the inquiry not go beyond Goodman and Mulcaire? "It's very difficult because I didn't have telephone numbers as the start point," says Surtees. But you did know that people were ringing in from the News of the World hub number? Yes, says Surtees. "In treble figures. Hundreds of times." And outside of the royal family, he confirms. Was that fuelling your suspicions that others outside of Goodman at NI might be involved in this conspiracy? "Yes." 3.19pm: Jay says he has been asked by News International to refute the suggestion by one officer that there was the fear of "some form of violence" against them, which NI said was not the case. Does Surtees accept that, asks Jay. "Very difficult for me to take a view either way. The information relayed to me is in my statement," he says. Of the search, Surtees concludes: "The moment was lost. It was gone." 3.17pm: "A number of editors challenged the officers over the legality of their entry into News International," says Surtees. "They were asked to go into a conference room until lawyers could arrive and challenge [their entry]. It was described to me as a 'tense stand-off' by the officer leading the search." The forensic management team was also unable to enter the building. "Our officers were effectively surrounded and photographed and not assisted in any way shape or form. The search was curtailed and did not go to the extent I wanted it to," says Surtees. 3.13pm: But Surtees says there was "some real difficulty" conducting the search. Four officers got into News International, the rest were barred.
"We got to the desk of Goodman, seized some material. There was a safe on the desk, which was unopened," says Surtees. His officers were surrounded by News International staff and photographers from other papers who started taking pictures of them. 3.09pm: Surtees is now talking about his search strategy of News International. He says he was aware of limitations on what he could search for, especially when likely to find journalistic material. The police search of News International focused on non-journalistic materials. "I wanted to search the desk, I wanted to search the financial areas, I wanted to find who was involved in this illegal activity," says Surtees. "Despite suggestions that it would be difficult under section 8 [of Pace] and not possible I sought to do that and obtained the section 8 warrant." 2.55pm: Surtees suggested a separate investigation outside of the anti-terrorist branch when it became clear hacking victims were also outside of the royal family. The proposal, made on 31 May 2006, was not taken up. 2.51pm: Jay asks, in May 2006, if Surtees suspected activities were going on beyond Goodman. "Yes", says Surtees. Surtees said he was informed by Vodafone of suspicious activity: a man ringing into Vodafone using the name Paul Williams [an alias of Glenn Mulcaire]. 2.49pm: Jay asks Surtees what evidence he believed was required to proceed with arrest. Surtees said they had to prove the voicemail had been accessed and the person had listened to it – a period of 10 to 14 seconds – before an offence was committed. And the message had to be listened to before the intended recipient had heard it. James Murdoch has stepped down as chairman of News International, the publisher of the Sun and Times, in an internal News Corporation reshuffle. Wednesday's move sees him give up responsibility for News Corp's crisis-hit British newspaper operation as he completes his relocation to New York.
The man once seen as his father Rupert Murdoch's automatic heir at the top of News Corp retains existing responsibility for "global television", overseeing businesses including the company's 39% stake in BSkyB, Sky-branded pay-TV companies in Europe and Star in Asia – and only gains the opportunity to become involved with the company's US Fox television operation as he settles in across the Atlantic.

James Murdoch's managerial move away from News International explains why he was not in London to help oversee the launch of the Sun's Sunday edition, which has been personally supervised by his father. Friends say he has been eager to leave the UK and drop responsibility for the Wapping newspapers for several months as the phone-hacking scandal enveloped the London outpost of the organisation.

He has faced repeated questions over what he knew about the extent of phone hacking at the News of the World. Although the hacking is known to have gone on until 2006, before Murdoch arrived, he presided over a period in 2009 and 2010 when News International denied again and again that phone hacking was more widespread than the activities of a "single rogue" reporter.

News International, meanwhile, becomes the only newspaper unit of the company not to report directly to a man named Murdoch. News International chief executive Tom Mockridge will now report to Chase Carey, the US television executive who is the company's number two, its president and chief operating officer. By contrast, those who run Dow Jones, the Wall Street Journal publisher, and News Ltd, the Australian newspaper operation, both report directly to Rupert Murdoch.

James Murdoch took up the job overseeing News International in December 2007, when he joined News Corp from BSkyB, where he had been chief executive. At the time he also became chief executive of News Corporation Europe and Asia, responsibilities which he retains.

2.45pm: Surtees says it's not as simple as looking at somebody's phone bill.
Leveson says he realises that; he was wondering what the phone companies could do. We move on.

2.43pm: Jay is asking Surtees about the list of 418 potential victims, and how and whether the police could have checked if their voicemails had also been hacked. Surtees says it would have been "virtually impossible" without a suspect in mind. So the police could have done it with Mulcaire and Goodman, but not with other numbers. Jay argues that actually it's not that difficult. "It's quite simple, isn't it?" Leveson doesn't think it's that difficult either.

2.42pm: Detective Chief Superintendent Keith Surtees takes the stand.

2.41pm: Williams has now finished giving evidence.

2.40pm: Williams says he would like to assure Leveson that "we absolutely put a lot of effort into that investigation with the best of intentions. We were absolutely not influenced by any of the things that have been suggested and what your inquiry is about".

So, James Murdoch has effectively been demoted, or more likely, can't wait to drop the Brit newspapers that caused him so much aggro.

2.20pm: Williams is asked if he read the Guardian's revelations about News International's settlement with Gordon Taylor in July 2009. He says he did, and it was based on existing information. "There was no intention to hide anything," he adds.

2.31pm: Leveson tells Williams he is not suggesting he has been involved "in some inappropriate relationship which has caused you to backtrack on an investigation". "But I am sure you will understand the concern that decisions taken in the heat of the terrible events of 2006 - and I'm not now talking about the arrests but the other work of your department - are very readily understandable.
"But it's quite difficult to translate some of those perfectly legitimate decisions into a construct where we now know the facts from the documents and say that there was nothing there at all.

"The risk is people might perceive your reactions to these issues encourage inappropriate inferences to be drawn.

"That is the concern I have got to address because it's critical the public has confidence in the police. The consequence of an approach that may be justified for one reason and then justified again for a slightly different reason, if it becomes unpicked, is that you have to start from scratch, which is exactly what has happened."

2.18pm: The Leveson inquiry has now started again and Detective Chief Superintendent Philip Williams has resumed giving evidence.

News Corporation today announced that, following his relocation to the company's headquarters in New York, James Murdoch, deputy chief operating officer, has relinquished his position as executive chairman of News International, its UK publishing unit. Tom Mockridge, chief executive officer of News International, will continue in his post and will report to News Corporation president and COO Chase Carey.

"We are all grateful for James' leadership at News International and across Europe and Asia, where he has made lasting contributions to the group's strategy in paid digital content and its efforts to improve and enhance governance programs," said Rupert Murdoch, chairman and chief executive officer, News Corporation. "He has demonstrated leadership and continues to create great value at Star TV, Sky Deutschland, Sky Italia, and BSkyB. Now that he has moved to New York, James will continue to assume a variety of essential corporate leadership mandates, with particular focus on important pay-TV businesses and broader international operations."
"I deeply appreciate the dedication of my many talented colleagues at News International who work tirelessly to inform the public and am confident about the tremendous momentum we have achieved under the leadership of my father and Tom Mockridge," said James Murdoch. "With the successful launch of the Sun on Sunday and new business practices in place across all titles, News International is now in a strong position to build on its successes in the future. As deputy chief operating officer, I look forward to expanding my commitment to News Corporation's international television businesses and other key initiatives across the company."

2.12pm: We've just been told that James Murdoch is to step down as executive chairman of News International. More details as we get them.

1.05pm: The inquiry has now broken for lunch and is expected to resume at 2pm.

1.04pm: Leveson asks Williams to look back at his original idea of preventing the abuse of phones, adding that the targets could have been notified. He suggests Williams might have thought of visiting News International offices and "reading the riot act" to make sure it did not happen again. Williams says it did not pass through his mind.

Did I think to go and speak to senior executives at the News of the World? No I didn't. Not because I was avoiding anything but because I thought I had made it very clear, not just to them but to any organisation that might consider doing this: if you are doing this it is clearly wrong and you are going to prison.

The judge asks him if he thinks Mr Justice Vos had the full picture when sentencing Mulcaire and Goodman. Williams replies: "I dearly wish they had pleaded not guilty. The prosecution case had been put together with all of this material; it would have been tested in court. It would have been plain to see - that's what we were preparing for."

1.00pm: Jay asks if Williams was angry when he heard News International's "one rogue reporter" line.
"I was realistic: it was a company protecting its reputation," replies Williams.

12.59pm: Asked why News International editors were not brought in for questioning, Williams said he didn't do it for fear it would yield a "no comment" response. Better to do the questioning from a position of strength, he says. He says he was not frustrated by DAC Clarke's decision not to progress with the investigation.

12.56pm: Jay asks: "Was an unhealthy close relationship between the police and News International a factor in stifling this investigation?" Williams replies: "I don't think it was a factor at all." He adds: "No one in my team had any contact with any of the newspapers, I can assure you. At no time did it ever influence the direction we went in with that investigation." Williams adds: "I can assure you if I had wanted to I could have stopped this investigation much earlier. It was my intention to make this very public." He says he has "absolute confidence" in DAC Clarke and describes him as the "most professional man I've ever worked with".

12.55pm: Jay asks if calling in News of the World executives would have been a "fishing expedition", as John Yates has suggested. Williams says: "You need to work from knowledge."

12.51pm: In September 2006, Brooks, then editor of the Sun, was told by the police she was being hacked. Jay says the reason the Met asked if Brooks (or "RW", as she is apparently referred to in the email) "wanted to take it further" was because she had been a victim of hacking.

12.47pm: Jay asks whether the police asked the News of the World for a list of its employees whose first names appeared in the corners of Mulcaire's notebook. "We did not: it would have been a major step change in the investigation," says Williams. Jay says it would not have represented a shift from "first to fourth gear ... surely the News of the World would have helped to this extent, maybe even limiting it to a particular desk?"
Williams reiterates: "To put together a criminal investigation I would not just use that facet; there would be a whole range of things I would want to put together."

Rebekah Brooks was today embroiled in an extraordinary row with Scotland Yard over her alleged treatment of a police horse. The Met has accused the former News International chief executive of returning Raisa, 22, in a "poor condition" after the force loaned her the steed for almost two years. But Ms Brooks's husband Charlie, a renowned racehorse trainer, today hit back, insisting the horse had been impeccably treated. He said: "I have been around and looked after horses all my life and I am confident that I know more about caring for them than people at the Metropolitan police."

12.44pm: Jay says the email would appear to suggest Mulcaire was contacting the News of the World before and after illegal accesses. "Pretty good circumstantial evidence" is how Jay describes it. Williams says he does not know what the email is referring to in this regard. Jay says: "Whatever this means, there was circumstantial evidence of hacking by Mulcaire on behalf of someone at the News of the World other than Goodman. Would you agree with that?" "Yes," says Williams. Three times. Borne out by phone records and call data? "Yes." Jay says Williams had corner names on notebooks, call data, all the material in the notebooks itself and "basic common sense", which was "more than a springboard for further investigation" and possible arrests. Williams: "I agree ... I come back to the decision: we were not going to do that."

12.43pm: Jay asks if the email was sent before the decision not to widen the investigation. Williams confirms that this is the case.

12.42pm: Williams says Rebekah Brooks (then Wade) was hacked on average twice a week by the NoW from 2005.

12.39pm: Williams says the email's assertion that there were more than £1m of payments is incorrect. "That figure is wrong. The figure of £1m is not known to me or the investigation team."
He adds that Mulcaire had a contract for £105,000 and there may have been other payments. Jay asks if cash payments were more than £200,000. Williams says he doesn't know. Jay responds that that was what Simon Hughes told the inquiry.

From the email: "3. the only payment records they found were from News Int, ie the NoW retainer and other invoices; they said that over the period they looked at (going way back) there seemed to be over £1m of payments. (a) [This section is unclear] they suggested ... News of the World journalists directly accessing the voicemails (this is what did for Clive). (e) they do have GM's phone records which show sequences of contacts with News of the World before and after accesses ... obviously they don't have the content of the calls ... so this is at best circumstantial. 10: they are going to contact RW [presumed to be a reference to Rebekah Wade] today to see if she wishes to take it further."

12.36pm: Jay is now going through the 15 September 2006 email, read out to the inquiry on Monday, from NI lawyer Tom Crone to the then News of the World editor Andy Coulson, suggesting that information had apparently been relayed to Rebekah Brooks by "cops". He says he is not going to ask who the police officer is but wants to know if the information is correct.

12.28pm: Jay puts it to Williams that while he was being "painstakingly cautious", he had "plenty of material" to go to a magistrate to say News International wasn't co-operating. "I was thinking to do this properly we would need to go through this material and we would need significantly more resources." Williams says DAC Clarke's decision not to go ahead with the investigation is not contained in any document. How was it reached, asks Jay. Williams replies: "The consistent decision was we were not going to broaden it." He adds that it was a resources decision.

12.26pm: Leveson asks Williams if the police had enough evidence to put in front of a magistrate that News International was not being co-operative.
Williams says: "No, I don't know." He says that the CPS was not consulted.

12.26pm: Williams says in September or October 2006 DAC Clarke decided not to widen the scope of the investigation.

12.23pm: Further investigation would have required "a lot of painstaking research", says Williams. "It's a significant amount of work. Not that I'm saying it's not worth doing. It's a major step change." Jay: "And it's about this point the decision is taken not to expand this investigation, not to take that step change? And the answer from the boss is no, we're not." Williams says he "absolutely understands the reason why not".

12.22pm: Jay continues: "But all you got from solicitors acting for the News of the World was extremely limited, evidentially." He asks if Williams believes the News of the World was obstructive. "Yes," says Williams. Williams continues: "At this juncture, we had reached a stage where, if we were going to go further as we speculated, there are leads here, there is potential with the names in the corner; this is a step change, a much broader investigation. Is this a much wider, bigger investigation?"

12.19pm: Jay asks if Williams should not have carried out further investigation, including calling in the people whose names appeared in the corner of Mulcaire's notes. "There were absolutely further leads that we could have followed in this investigation," says Williams. You were contemplating that others at the News of the World may have been involved? "Yes," says Williams.

12.18pm: Jay asks why the Met's production order didn't include Goodman's safe and computer. Williams says it covered "all relevant material".

12.17pm: Leveson tells Williams that notifying victims is no different than if the police had foiled an armed robbery at a bank before it took place. "I can understand this is not an easy job, Mr Williams.
"If you thought a bank was a potential target of an armed robbery but you foiled it so the bank was never touched, would you call the bank a victim of a conspiracy to rob?" asks Leveson. "So why is it any different for those on your list when it is abundantly clear Mr Mulcaire is collecting phone numbers and PIN numbers and all this detail? He is probably doing it for someone else and therefore he is conspiring with them, probably to use this information to access voicemails."

Williams says: "In hindsight I entirely agree … I totally understand when people look back they think more people should have been informed."

@rupertmurdoch RT.. about R Brooks saving horse from glue factory!.. be fair boss, if it was Cherie Blair or Sarah Brown would SUN run story?

12.12pm: Williams says he hoped the phone companies would tell customers whose voicemails may have been intercepted. In fact, it took the networks almost six years to tell customers.

12.11pm: Williams says: "At the time, my mindset was that to be a victim, the voicemail had to be unopened. I was looking for a way of making it public." He adds that the strategy of not telling potential victims was not about limiting the story.

12.08pm: Williams confirms the police decided to notify only four categories of victims whose voicemails had been called: MPs, the royal household, the police and the military.

12.03pm: Williams says the mobile companies were not telling him phone hacking was a big problem. Lord Justice Leveson is sceptical about how phone companies could ever check what was going on – and suggests they might not want to reveal the full scale of the problem for commercial reasons. Williams says: "I totally accept it was very challenging for them. Some of them couldn't do it. Vodafone and O2 had a better software system."

11.57am: Jay asks why all 418 names on the police list of potential victims were not contacted.

All I got there was a snapshot in time of material we happened to receive. There could well be a wider pool of people ...
this strategy was aimed at the full potential of what those potential victims might be. I was hoping to address that much wider potential, which would have included everybody on that list.

Williams's memo at the time said contacting victims would be "resource intensive", and the hacking was concerned with obtaining "salacious gossip".

11.53am: The police sent a production order to all five UK mobile phone companies asking for details of calls to a list of unique voicemail numbers (UVNs). "You were beginning to build up a clear picture of access to voicemail by others at the News of the World?" asks Jay. Williams says the data was in respect of Goodman and Mulcaire, and a "hub number" at the News of the World. "We had information the 'hub number' was calling these unique voicemail numbers," he adds. Williams said police wanted to know if it was Goodman or Mulcaire ringing from there, and what the phone data was behind that hub number. "The number ended 5354," says Jay. "Does that ring a bell?" Williams can't remember.

11.52am: Jay says police adopted an "overly cautious approach" to potential victims given the "persistent pattern of behaviour" by Mulcaire. "Everything he is doing is with the objective [of accessing voicemail]," he adds.

11.42am: Williams says he launched a financial review of Goodman and Mulcaire as the police were considering attempting to show that their assets were the proceeds of crime. He says the only amount they could definitively prove to be Mulcaire's proceeds from phone hacking was the £12,300 in payments from Goodman cited in the prosecution.

11.37am: Jay asks Williams whether there was evidence of the involvement of the News of the World editor, Andy Coulson, or other journalists in phone hacking.

We were all aware what the speculation was and how this might go further than these two men, because that was part of our discussion about whether there might be other defendants. At that time we didn't have evidence.
A CPS memo of a meeting in August 2006 said the police did not have evidence Mulcaire was working with other NoW journalists.

11.37am: The inquiry has now resumed.

11.28am: The inquiry is now taking a short break.

11.28am: Jay suggests the evidence of who requested the work was the corner name. Williams says: "It was indicative, I agree, that that could well be the person. From my point of view, as an investigation, I would need to build that case to actually prove it in court." Jay asks: "Did you associate any of the corner names, which were first names, with any employees of the News of the World?" Williams says: "They could be from any organisation."

11.22am: Jay asks if Williams had seen the "corner names" in Mulcaire's notes. He says yes. However, he says to build a case he would need substantive evidence of the identity of the corner names. Jay suggests that was a "pretty strong clue". Williams replies: "That was our supposition. The names in the corner were the person who potentially either instructed him or for whom Mulcaire was doing the work."

What was absolutely absent: we didn't see anything coming through to Mr Mulcaire that would say, from whoever, I would like you to do whoever. Nor did we see any requests. Nor did we see the outcome of what he did … and how he billed it.

11.19am: Jay points out that the list contained people who would not have been of interest to royal correspondent Goodman, including paedophiles, reporters and others. Williams says he did not recognise many of the names.

11.15am: Surtees instructed a list to be drawn up of those potentially compromised. Williams says the list took about a week to compile, and contained the key names that might be involved. The list contained 418 or 419 names. He says the list was not "definitive" – it gave an idea of the "potential pool", the scale of the number of victims involved.

11.11am: Jay asks Williams about Mulcaire's police interview.
He was asked about the hacking of John Prescott and Joan Hammell. Williams says he was aware that there may have been more targets of interception, but says the challenge was to prove that they were hacked. He adds that Mulcaire was getting information for the media world; it was not clear whether he was using illegal or legal techniques.

My mindset was Glenn Mulcaire is getting information presumably for the media world and he may well be using a whole range of different techniques; some of those techniques may well be distasteful to the public but may be lawful. But others may be illegal.

11.11am: Williams says he found out about the "for Neville" email at some time between the arrests in August and the trial in November.

11.10am: Surtees told Williams that News International had been obstructive when the police tried to arrest Goodman.

11.06am: Goodman and Mulcaire were arrested on 8 August 2006. Williams had been on leave and was briefed on the arrests by Detective Chief Superintendent Keith Surtees. The next day, the police discovered a plot to blow up nine airliners.

11.04am: Williams wrote a memo saying the number of victims wouldn't make much difference to the sentence, which would be relatively small. Even if he found 100 victims, there would be relatively little difference.

@rupertmurdoch You comment on her horse but not on her insider knowledge of a criminal investigation into your company. Have you no shame?

10.58am: At this time, Williams again raised issues of resources. He said it was important to "formally record" the increased workload on SO13, with 72 ongoing operations. Jay suggests he was putting down a "firm marker" to his superiors that resources were under pressure. Williams describes it as a "moment of reflection". He said he was "happy with resources" but outlining "context".
"Our judgment at that time was a balance of risk and harm; we judged that very much on the potential of what that threat to life might be, judging it against different operations. But at this stage, for what we were doing, I was satisfied we had enough resources."

10.58am: The police investigation subsequently revealed two interceptions by Goodman, and two by Mulcaire, relating to royal household phones.

10.56am: What Williams wrote in 2006: "I suspect the media world may well be aware of this vulnerability ... more sinister side is knowledge could be used by criminals ... to threaten national security." Williams says he feared it could be a technique used across all media. However, he says "at no time, once they were aware of the risks, did any of the phone companies come back and say this is happening all over our system".

Now they are blaming R Brooks for saving an old horse from the glue factory. What next?

10.52am: "This was new to them, they didn't realise this could be done," says Williams of the phone companies. "They are telling us it's news to them that people were able to do this. Their own engineering software, although it could show what we called the rogue numbers coming into the voicemail number, had difficulty telling them what was going on in the voicemail box. They couldn't tell us if a message existed in the voicemail box." They had to use more specialist software to get a more accurate picture of what was going on in Jamie Lowther-Pinkerton's voicemail.

10.51am: Williams stresses: "I needed to build my case before I actually confronted the issue."

10.50am: Williams said he could have spoken to Goodman: "that option was open to me but I didn't believe I had enough evidence. He may have said no comment and that would have been the end of the matter." He says he wanted to have "as strong a case as possible ... I didn't believe I had the evidence." Williams said he needed to build his case with the help of the phone companies.
Key, or so he thought, was that the intercepted message was previously unlistened to.

10.50am: Williams says: "I was aware there was potentially evidence – untested – that some members of the royal household may have been having their unique voicemails intercepted. In terms of it actually being a new unlistened-to message, I hadn't got evidence of that." He adds: "I was not going to consider doing nothing. I very much wanted to do something. Me and my team put in a huge amount of effort maintaining the support of the victims. We wanted to bring this to court to demonstrate it was absolutely a criminal offence and not to be tolerated."

10.47am: Police strategy at the time involved asking Jamie Lowther-Pinkerton, one of the private secretaries to Princes William and Harry, not to pick up a voicemail and seeing if it was picked up by one of the rogue numbers. Williams says he wanted to save potential victims – the royals – from embarrassment if the case came to court. He did not want the content of their phone calls to be revealed.

To maintain the confidence of my victims I wanted to be able to assure them, if at all possible, that if they were going to be a victim in my case it would be solely on the fact, technically, that one of the messages had been intercepted - not the who or what it was about.

10.36am: At this stage, Williams says, he alerted his supervisory officers that more resources would be required.

I was raising the potential public or media spin that might be put on it - that sometimes we are using a sledgehammer to crack a nut; why are we using anti-terrorism officers to investigate this offence that has nothing to do with terrorism? Equally there were valid arguments for why we should retain it.

Williams adds that he wanted the inquiry to be kept within SO13 because he feared leaks would jeopardise the operation by warning the suspects and alerting the media.

10.35am: The investigation identified five or six potential hacking victims, all within the royal household.
It concluded at the time: "This ability was highly unlikely to be limited to Clive Goodman alone. It is probably quite widespread amongst those who would be interested in such access. There is a much wider security issue within the UK and potentially worldwide."

10.34am: Williams says the key to the investigation was that the interception took place prior to the recipient listening to the message. He said that was the opinion of the Crown Prosecution Service.

My belief was, consistent with what the law said, that for this to be a criminal offence it had to be a new and unread message. We coined the analogy of the 'unopened envelope on a desk'.

10.32am: At a review of the case on 4 April 2006, charges were considered for interception under the Regulation of Investigatory Powers Act, and under the Computer Misuse Act. The latter was later discarded.

10.27am: The private secretaries indicated that they were willing to co-operate with a prosecution.

If this is possible it is likely to be far more widespread than CG (Clive Goodman), hence serious implications for security confidence in Vodaphone voicemail and perhaps the same for other service providers.

Jay says this was "prescient".

10.26am: This was significant, says Jay, because Vodafone "did not know" this was possible. "At the time that was exactly the position with Vodafone," confirms Williams. He says Vodafone said it was "not possible" to do this. Only because we persisted did they discover that this was possible, says Williams. "This was consistent with other phone companies at this time."

10.23am: Discussions with Vodafone revealed that several numbers were calling in to phones belonging to two private secretaries to Princes William and Harry. One of the numbers was traced to the home phone of News of the World royal editor Clive Goodman.

10.22am: Williams says he was picked as senior investigating officer by Clarke because phone hacking was a "kindred matter", not a core anti-terrorism investigation.
He says the first stage of the investigation was "What is actually happening here?" He says it was not known definitively that there had been interception of voicemails.

10.21am: Operation Caryatid was launched in December 2005 after members of the royal household reported fears that their voicemails had been hacked by the News of the World.

10.19am: SO13 oversaw the 2006 phone-hacking investigation, Operation Caryatid. Williams says SO13 was under "absolutely huge pressure" in relation to its anti-terrorist activities in 2006 following the 7/7 bombings.

10.12am: In 2006, Williams was a member of SO13, the Met's anti-terrorism unit. The head of SO13 at the time was DAC Peter Clarke, who reported to AC Andy Hayman. DAC John Yates was responsible for the specialist crime unit at the time and had no involvement in specialist operations, including SO13. SO13 had four investigation teams.

10.10am: Detective Chief Superintendent Philip Williams takes the stand.

10.06am: The inquiry has begun. Robert Jay QC, counsel to the inquiry, says he will deal with the police investigations into phone hacking in 2006 and 2009. He says the police officers' statements to previous reviews will be used as evidence but they must be redacted before they can be published.

9.54am: Welcome to the Leveson inquiry live blog. After criticism of the police yesterday by Simon Hughes and Chris Jefferies, today the inquiry will hear evidence from serving Met officers Detective Chief Superintendent Philip Williams, who led the original phone-hacking investigation, Detective Inspector Mark Maberly and Detective Chief Superintendent Keith Surtees.
He was a rich businessman, an outspoken outsider with a love of conspiracy theories. And he was a populist running for president. In 1990, when Donald Trump was still beyond the furthest outskirts of American politics, Stanislaw Tyminski was trying to become the new president of post-communist Poland.

He shared something else with the future Trump: nobody in the political elite took Tyminski seriously. That was a mistake. He was the standard-bearer for a virulent right-wing populism that would one day take power in Poland and control the politics of the region. He would be the first in a long line of underestimated buffoons of the post-Cold War era who started us on a devolutionary path leading to Donald Trump. Tyminski’s major error: his political backwardness was a little ahead of its time.

In true Trumpian fashion, Stan Tyminski couldn’t have been a more unlikely politician. As a successful businessman in Canada, he had made millions. He proved luckless, however, in Canadian politics. His Libertarian Party never got more than 1% of the vote. In 1990, he decided to return to his native Poland, then preparing for its first free presidential election since the 1920s.

A relatively open parliamentary election in 1989, as the Warsaw Pact was beginning to unravel, had produced a solid victory for candidates backed by the independent trade union, Solidarity. Those former dissidents-turned-politicians had been governing for a year, with Solidarity intellectual and pioneering newspaper editor Tadeusz Mazowiecki as prime minister but former Communist general Wojciech Jaruzelski holding the presidency. Now, the general was finally stepping aside. Running in addition to Mazowiecki was former trade union leader Lech Walesa, who had done more than any other Pole to take down the Communist government (and received a Nobel Prize for his efforts). Compared to such political giants, Tyminski was an unknown.

All three made promises.
Walesa announced that he would provide every Pole with $10,000 to invest in new capitalist enterprises. Mazowiecki swore he’d get the Rolling Stones to perform in Poland. Tyminski had the strangest pitch of all. He carried around a black briefcase inside which, he claimed, was secret information that would blow Polish politics to smithereens. Tyminski managed to get a toehold in national politics because, by November 1990, many Poles were already fed up with the status quo Solidarity had ushered in. They’d suffered the early consequences of the “shock therapy” economic reforms that would soon be introduced across much of Eastern Europe and, after 1991, Russia. Although the Polish economy had finally stabilized, unemployment had, by the end of 1990, shot up from next to nothing to 6.5% and the country’s national income had fallen by more than 11%. Though some were doing well in the new business-friendly environment, the general standard of living had plummeted as part of Poland’s price for entering the global economy. The burden of that had fallen disproportionately on workers in sunset industries, small farmers, and pensioners. Mazowiecki, the face of this new political order, would, like Hillary Clinton many years later, go down to ignominious defeat, while Tyminski surprised everyone by making it into the second round of voting. Garnering support from areas hard hit by the dislocations of economic reform, he squared off against the plainspoken, splenetic Walesa. Tyminski did everything he could to paint his opponent as the consummate insider, a collaborator with the Communist secret police in his youth. “I have a lot of material and I have it here… and some of it is very serious and of a personal nature,” Tyminski told Walesa in a debate on national television, holding that briefcase of his close at hand. Walesa retaliated by accusing him of being a front man for the former communist secret police. 
Tyminski was forced to admit that his staff did include ex-secret policemen, but he never actually opened that briefcase. Walesa was resoundingly swept into the presidency by an electoral margin of three to one. Stan Tyminski eventually took his wild conspiracy theories and populist pretensions back to Canada, a political has-been. And yet he was prescient in so many ways (including those charges against Walesa, who probably did collaborate briefly with the secret police). The liberal reforms that Eastern Europe implemented after the transformations of 1989 were supposed to be a one-way journey into a future as prosperous and boring as Scandinavia’s. Tyminski, on the other hand, had conjured up a very different, far grimmer future — unpredictable, angry, intolerant, paranoid — the very one that seems to have become our present. Tyminski’s “children” now govern nearly every country in Eastern Europe, and the United States, too, is in the grip of a Tyminski-like leader. Perhaps these illiberal leaders have reached the peak of their influence — or have they? The opposite scenario is too dismal to contemplate: that the political climate has irreversibly changed and liberalism has irrevocably weakened in the U.S., in Eastern Europe, everywhere. Imagine the history of Eastern Europe after 1989 as a train leaving a decrepit station where tasty snacks and interesting reading material aren’t available, the public address system issues garbled announcements, the bathrooms are out of order, and the help desk unstaffed. As the final boarding chimes echo through the station, the passengers pile onto the train. A lucky few are in a first-class car with access to a surprisingly good cafe and plush sleeping compartments, a somewhat larger group in the reserved second-class seats, and everyone else crowded into totally rundown cars with appalling seats. 
The ultimate destination all of them have been told is a lovely terminal with well-provisioned stores, clean public restrooms, and a responsive administrative system in a city and country equally well run. Think of this as the train of “transition.” Everyone on it seems convinced that they’re en route to a stunning market democracy in a post-Cold War world where political differences and ideological struggles have lost their relevance, where as American political theorist Francis Fukuyama famously put it in 1989, the “end of history” is in sight. “Today,” Fukuyama wrote a couple of years later, “we have trouble imagining a world that is radically better than our own, or a future that is not essentially democratic and capitalist.” Pragmatic decisions are all that’s left, and they’re to be chewed over by policymakers and implemented by bureaucrats. If Eastern Europeans knew what they’d left behind and were fervent about where they were heading, they had little idea about the nature of the journey they were undertaking. German political scientist Ralf Dahrendorf tried to provide a few time stamps for such a transition: six months to create parties and political institutions, six years to establish the basis for a market economy, and 60 years to build a proper civil society. Except for some cranky members of the extreme right and a few Stalinist leftovers, everyone in the region seemed to back this liberal project, seeing it as a ticket into the larger European community. For the first few years, the train of transition rolled along. There was grumbling in the back cars, but everyone was still on board with the overall plan to reach Western Europe or bust. As it happened, the first-class passengers were easily transported to the heart of the sunny West. The second-class passengers barely made it across the border. And the rest didn’t get far beyond that original, disheveled station. 
When I first traveled across Eastern Europe in 1990, the very year of the Polish presidential election, many of the people I interviewed expected to be living like Viennese or Londoners within five years, a decade at the most. If this was a delusion, it was one partially fueled by the outside advisers who flooded the region in 1990. Planners from the U.S. Agency for International Development, for example, put a five-year window on their assistance package. And for some, the transition did last only a few years because cities like Warsaw in Poland quickly became high-priced locations for international corporate offices and NGOs. So the capital cities of Eastern Europe made the trip west, while smaller cities and towns and, above all, the countryside remained mired in the past. This urban-rural gap mirrored the one that still persists between Western Europe and Eastern Europe. In 1991, according to the World Bank’s figures, Hungary’s per capita gross domestic product was $3,333, Austria’s $22,356. By 2016, Hungary’s had risen to $27,481, while Austria’s stood at $48,004. In other words, though the gap had been narrowed considerably, as with other Eastern European countries — Poland ($27,764), Romania ($22,347), Bulgaria ($20,326) — it had at best been cut in half. The liberal project succeeded in ushering virtually all of Eastern Europe into the European Union. But in the end, because of the persistent gap between expectations and reality, voters began to look around for something different. Stan Tyminski ran for president before unemployment in Poland soared from 6.5% in 1990 to 20% by 2002. In Hungary, Viktor Orbán had far better timing. Orbán was a young lawyer in Budapest in 1988 when he helped found a liberal party that you had to be under 35 to join. Fidesz, the Alliance of Young Democrats, won a commendable 21 seats in the 1990 elections, good enough for a sixth-place showing. 
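The "cut in half" arithmetic can be checked directly from the World Bank figures just quoted. A minimal sketch in Python (the interpretation — Hungary's shortfall from Austrian parity — is my reading of the claim, not spelled out in the article):

```python
# World Bank per capita GDP figures as quoted in the text (presumably PPP dollars).
figures = {
    "Hungary": {1991: 3333, 2016: 27481},
    "Austria": {1991: 22356, 2016: 48004},
}

for year in (1991, 2016):
    hu, at = figures["Hungary"][year], figures["Austria"][year]
    shortfall = 1 - hu / at  # how far Hungary sits below Austrian parity
    print(f"{year}: Hungary at {hu / at:.0%} of Austria, shortfall {shortfall:.0%}")
```

On these numbers Hungary moves from about 15% of Austria's level to about 57%, so the relative shortfall falls from roughly 85% to 43% — almost exactly halved, which is presumably the sense of "cut in half"; the absolute dollar gap, by contrast, actually widens slightly (from $19,023 to $20,523).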
Four years later, that country’s former Communist Party (renamed the Socialists) came out on top, while Fidesz dropped a couple spots. What disappointed Orbán far more, however, was the way the Alliance of Free Democrats — the “adult” version of Fidesz — opted to form a coalition government with the Socialists. That was the moment when, having second thoughts about liberalism as a vehicle for his own personal ambitions, he began to transform both Fidesz, which dropped its under-35 requirement, and himself. When economic “reform” shocked Hungary as it had Poland, Orbán recast himself as an increasingly illiberal Hungarian nationalist and his once-liberal party became a pillar of the new right. In 2010, he became prime minister for the second time, a position he’s held for the last seven years. In a remarkable number of ways Orbán anticipated Donald Trump. He reversed his country’s longstanding mistrust of Russia by openly courting its president, Vladimir Putin, and pledging to transform Hungarian politics along the lines of that country’s “illiberal state.” He railed against mainstream journalism, attempted to bend the judiciary (and the constitution) to his will, and rigged the state apparatus to benefit his supporters. In perhaps his most ominous twist, Orbán courted the Hungarian version of the alt-right with relentless anti-immigrant statements and the occasional anti-Semitic gesture. The Polish right wing was so enamored of Orbán’s success that, in 2011, former Prime Minister Jaroslaw Kaczynski announced that “the day will come when we will succeed and we will have Budapest in Warsaw.” Four years later, his Law and Justice Party took power on a mixed platform of populism and conspiracy theories reminiscent of Stan Tyminski’s. Now, Donald Trump is constructing Budapest in Washington D.C., as he unwittingly follows Tyminski’s and Orbán’s trajectory. The reality TV star cultivated his status as an extreme outsider. 
During the Obama era, he identified a political opportunity on the right and, in September 2009, switched from the Democratic to the Republican Party. Seven years later, having combined outlandish conspiracy theories (think: birtherism) with an astute critique of liberal elites, he squeaked into power. He surely owes something to native (and nativist) traditions from Huey Long to Ross Perot, but he shares so much more with his compatriots across the Atlantic. That transatlantic commonality begins with his canny exploitation of the gap between expectation and reality. The United States, like Eastern Europe, was going through its own “economic transition” in the 1990s. Millions of Americans expected the new economy — the global economy, the digital economy, the service economy, the sharing economy — to produce new jobs, better jobs. And it did generate enormous wealth, but mostly, as in Eastern Europe, for a narrow, highly urbanized slice of the population. Income inequality has increased so dramatically that the American world now resembles the nineteenth-century Gilded Age. In the eras of Presidents Franklin Delano Roosevelt and Lyndon Johnson, the liberal project meant government intervention in the economy on behalf of working Americans and the disadvantaged. By the time Bill Clinton took the White House in 1993, the focus of the “new” Democrats was already shifting to global free-trade deals that would only accelerate the country’s loss of manufacturing jobs and a harsh vision of social spending represented most starkly by Clinton’s grim version of welfare reform. Meanwhile, the increasing coziness of the “new” Democratic Party and Wall Street would lead to significant financial deregulation that, in turn, would produce an economic meltdown in 2007-2008. Although Barack Obama would prove progressive on some issues, he would also embrace Clintonesque positions on trade, social welfare, and Wall Street. 
As in Eastern Europe, such a liberal project would leave many people behind. So no one should have been surprised that these disappointed voters would eventually seek their revenge at the polls, as traditional Democrats in working-class neighborhoods began to vote Republican. Aided by “dark money” and his dark mutterings about migrants, Mexicans, and Muslims, Trump rode a wave of Eastern European-style disenchantment to the Oval Office. Now, he’s taking his revenge not just against the neoliberalism of the Clinton and Obama years, but the entire twentieth-century liberal understanding of the state. Conservative anti-tax advocate Grover Norquist once remarked that his dream was not “to abolish government” but “to reduce it to the size where I can drag it into the bathroom and drown it in the bathtub.” The question today in both Eastern Europe and the U.S. is: Have Trump, Orbán, and others shrunk liberalism to such a degree that they can now drown it in that bathtub? Those wielding political metaphors love the idea of oscillation. You know, the pendulum swinging back and forth, the tide ebbing and flowing, voters opting for one political flavor and then, surfeited, returning to what they once rejected. So far, voters in Eastern Europe haven’t shown any signs of wanting to return to the liberal politics that had delivered their countries to the promised land of European Union (EU) membership. In Hungary, Fidesz continues to lead the polls as the 2018 elections approach. The right-wing Law and Justice Party in Poland has only increased its popularity since it captured the state in elections two years ago. Indeed, the rest of the region is following their lead. In October, the party of billionaire right-wing businessman Andrej Babiš captured the most votes in the Czech elections. Boyko Borisov, a populist with an authoritarian bent, has returned to power in Bulgaria, while nationalists are back in charge in Croatia. 
The anti-immigrant and anti-Muslim leader of Slovakia, Robert Fico, has been prime minister for nine of the last 11 years. (Though governing from the social-democratic left, Fico has exhibited distinctly authoritarian tendencies.) These leaders have different political philosophies and operate in different cultural contexts, but they all share one thing: an aversion to the liberal project. Further out on the fringes, the Eastern European alt-right flourishes. This year, neo-Nazis flew the American flag in a February march in Croatia’s capital Zagreb to celebrate Donald Trump; 60,000 far-right nationalists gathered for Poland’s annual independence day in November; and Hungary has become a virtual mecca for extremists. As right-wing authoritarians gain mainstream appeal, those further to the right are courting greater visibility. In Europe, there is still a counterweight to this rejection of the liberal project: the European Union. It has, for instance, strongly censured the Polish and Hungarian governments for their illiberal policies, and it still carries real weight. Unless the EU manages to transform its economic policies in a way that stops favoring rich countries and wealthy individuals, however, it’s likely to prove incapable of stemming the tide of reaction. New French President Emmanuel Macron has offered some interesting proposals — from an EU-wide financial transactions tax to the taxation of digital companies — that might temper some of the galloping greed. But such EU reforms won’t boost the fortunes of liberalism in Eastern Europe unless that organization begins to address the persistent divide between the two parts of the continent and (as in the United States) between thriving metropolitan centers and those left behind in more rural areas. In America, Donald Trump remains a deeply unpopular president. Widespread political resistance to his administration and the Republican Congress has already claimed some early victories. 
But thanks to the Supreme Court’s Citizens United decision in 2010, rich, right-wing, anti-liberal individuals and foundations have had an outsized impact on politics. Buoyed by the support of the Koch brothers and others, the Trump administration will do everything possible over the next three years to bankrupt the economy through tax “reform,” pack the courts with anti-liberal judges, shed federal personnel, gut federal regulations, and otherwise ensure that the government it hands to its successor will be as close to drowned as possible. When it comes to this version of “populism,” Eastern Europe led the way. The question now is: Will it again? If anti-Trump forces here don’t address persistent voter disgust with the status quo, the Eastern European example offers a grim glimpse of a possible American future as right-wing libertarians, intolerant nationalists, and alt-right extremists secure their lock on the policy apparatus. Waiting for the “inevitable” pendulum swing of politics is like waiting for Godot. The political scene will not regain equilibrium by itself. In Eastern Europe, as in the United States, the opposition has to jettison those elements of the liberal project that have proven self-defeating — the economics of inequality and the politics of collusion with the powerful — and offer a genuine antidote to right-wing populists. If not, you might as well slap a do-not-resuscitate order on liberalism, kiss social welfare goodbye, and brace yourself for a very mean season ahead. This entry was posted in Banana republic, Economic fundamentals, Free markets and their discontents, Globalization, Guest Post, Income disparity, Politics, Social policy, The destruction of the middle class on December 6, 2017 by Yves Smith. The elite destroyed their own legitimacy. 
They transformed economic orthodoxy from the post-war Keynesian system, imperfect though it was, into neoliberalism, which is little more than a justification for the rich to loot and pillage the rest of us. There is simply too much money flowing to the top 1% and, to a lesser extent, the top 10%, and too little for the bottom 90%. They created a mess of their own making. Had they kept the Keynesian system, with wages rising in line with productivity, they would have had a faster-growing economy. Instead they’ve chosen an oversized slice of a much smaller and likely shrinking pie. They are now facing a legitimacy crisis. The Chinese have a term for this: “performance legitimacy” – namely, that the elites are in power due to their ability to deliver economic gains. The CCP in China has tried moving away from that. In reality, no government ever can. All regimes and ideologies are held up to that standard. Neoliberalism, having utterly failed the working class by design, is facing a big legitimacy crisis and, by extension, so is capitalism as a whole. Destroying worker rights, austerity, a “no cash society” to enrich the banks, attacking civil liberties, deregulating finance even more, privatization. When this inevitably becomes exposed, of course people are going to be desperate for an alternative. Can anyone blame people for getting desperate? If the liberal order dies, it will be because the greed of its own elites destroyed it and because it was unworthy of being saved in the eyes of the general public. “The term neoliberalism was coined at a meeting in Paris in 1938. Among the delegates were two men who came to define the ideology, Ludwig von Mises and Friedrich Hayek. Both exiles from Austria, they saw social democracy, exemplified by Franklin Roosevelt’s New Deal and the gradual development of Britain’s welfare state, as manifestations of a collectivism that occupied the same spectrum as nazism and communism. 
When the term re-appeared in the 1980s in connection with Augusto Pinochet’s economic reforms in Chile, the usage of the term had shifted. It had not only become a term with negative connotations employed principally by critics of market reform, but it also had shifted in meaning from a moderate form of liberalism to a more radical and laissez-faire capitalist set of ideas. Scholars now tended to associate it with the theories of economists Friedrich Hayek, Milton Friedman and James M. Buchanan, along with politicians and policy-makers such as Margaret Thatcher, Ronald Reagan and Alan Greenspan. The movement’s rich backers funded a series of thinktanks which would refine and promote the ideology. Among them were the American Enterprise Institute, the Heritage Foundation, the Cato Institute, the Institute of Economic Affairs, the Centre for Policy Studies and the Adam Smith Institute. They also financed academic positions and departments, particularly at the universities of Chicago and Virginia. I may be wrong, but I understood that Eastern Europe, though racist and xenophobic, was not yet planning on jettisoning its social safety net, as the GOP under Trump is. As liquidity dried up in global financial markets, investors retreated to ‘safer’ havens in the core capitalist states. Faced with this situation, the openness of the CEE economies turned out to be a recipe for disaster. The combination of relatively small economies (except Poland), together with extreme openness to foreign capital and high dependency on exports, left the region highly exposed to the effects of the credit crunch. The Hungarian economy fitted these descriptions perfectly. Its economic openness is extremely high: its proportion of trade in total GDP amounted to 161.4 per cent in 2008 (the highest in the EU-10) and 70 per cent of this trade went to advanced economies. . . 
These concerns boiled over in October 2008 when foreign investors sold more than US$2 billion of Hungarian government securities (nearly 5 per cent of Hungary’s foreign-owned securities at the time) within a couple of days. Government officials and policymakers in Budapest now admitted that Hungary faced the threat of a ‘run on the forint’. . . A classic ‘shock doctrine’ intervention, aka “when there’s blood on the ground, buy property.” The far right capitalized on the public anger over this in 2010, it looks like. The post-Com states are a very complicated situation, and TBH, I don’t think that liberalism or neo-lib is the only thing – or even THE thing – that can be blamed for the state they are in. – Elites (political and economic) in most of the countries there either were part of, or have direct connections with, the elites of the communist regimes (often this includes secret-service people, but given that the majority of records on those were destroyed by the regimes very quickly, it’s often hard to prove either way). – The economies of the countries were backward, with the exception of the Czech Republic. Comparing even post-WW2 Germany with any of these countries (again, with the possible exception of CZ) is just dumb. That’s not to say the people in those countries did not expect to get up to the level of the “West” well within one generation, but realistically, it was never going to be the case. – Because, if nothing else, the smartest and most entrepreneurial from these countries emigrated en masse – even before they joined the EU. More than 1m Poles emigrated (one interesting consequence of Brexit may be that more than a few might be returning to Poland – where it’s unlikely they would vote for the current ruling party). In some of the Baltics the population shrank by >10% through emigration, which was pretty much concentrated in the below-35 age group. 
One exception to this is again the Czech Republic, where the emigration was much smaller (pretty much most of it occurred well before accession to the EU). – The communist-regime heritage made (and still makes) it hard to get anything close to the rule of law. Post-revolution, the majority of the judges were still the ones put in by (and very often with direct links to) the communist regimes, resulting in widespread corruption of the judiciary. It’s not uncommon in some countries for lower courts to make very weird calls even now, which are subsequently overturned by constitutional courts (as direct contradictions of standing law) – if the claimant has the resources to push it all the way. – In general, there is still a substantial impact of the communist regimes on what society feels to be acceptable (for example, “tax is theft” is a statement that a number of middle-class people would strongly agree with, even those who say they are left-leaning. Probably much more so than in the “West” – and that despite direct taxes often being smaller). It’s quite interesting, as in some cases the communist “all for society” (which, from a practical perspective, was a lie, and known to be a lie) got very quickly replaced by the Thatcherite “there’s no society, only individuals” at just about all levels. That all said, there are also interesting “exceptions” to the rule. For example, Fico in Slovakia (mentioned in the article) runs the largest party, got an absolute majority in the previous parliament, etc. Yet when he tried to concentrate even more power by running for president, he lost resoundingly (by almost 20%) to the pro-EU, liberal (though not neoliberal) Kiska, on a turnout only slightly lower than in parliamentary elections. 
Also, in recent regional elections in Slovakia, Fico’s party lost rather badly – which was the second big headline there from the elections, with the first one being that an openly fascist “zupan” (basically a regional governor) elected last time was summarily voted out. There are presidential elections in the Czech Republic next Jan, and it will be interesting to see whether the current populist, pro-Russia (his presidential staff has openly received Russian money in some cases) president Zeman will win or not – at the moment it looks like 50/50. This is especially interesting since the last elections (October this year) put in power the populist billionaire Babis (also mentioned in the article), and it looks like his minority government will be propped up by an interesting coalition of the Stalinist communists of the KSCM (Communist Party of CZ) and the pretty much neo-fascist SPD (direct democracy party) – which have only about three common items in their manifestos (leave the EU, leave NATO, institute direct referendums), none of which is in Babis’ manifesto. Thanks, vlade. As I read the article, I was reminded that when Trump’s chances of victory were rising, some commenters and I were comparing him to Berlusconi. Berlusconi, in some respects, is still a better model for what Trump is: well-connected real estate developer, media tycoon, serially divorced with current (indifferent?) model-wife. And I am glad that you pointed out the mass exodus, caused by the economic dislocations and the neoliberal faith of the elites: Certainly, Lithuania, which recently may or may not have stabilized population-wise, saw its population plummet. Even the Russians couldn’t pull off that trick. And, possibly because their neighbors have all devolved into nationalist-neoliberal states, including Belarus, the Lithuanians have not fallen quite as badly for the fog of nationalism. 
Or I may be thoroughly biased because the G family originated way out in the Lithuanian countryside in one of those tragic towns that once had a little synagogue and even a Lutheran church. Once upon a time. A reminder of how all of these tendencies (lack of justice, economic experiments, appeals to so-called Christian morality, fantasies of a glorious national past) will end. To an extent, the current situation is quite paradoxical, because on one hand you have adoration of Putin and what he did with Russia, while at the same time fear of it (most extreme probably in Poland). Instability as a social policy is a fundamental tenet of capitalism. It is a virtue. By making the working classes’ situations precarious: 1) it turns some of them into low-paid but tax-paying entrepreneurs with a dubious ability to survive in their elder years; 2) it instils market discipline in another category of workers, leading them to accept immiseration wages; 3) and it can ultimately teach those who are slowly wasting away that they have failed the system. I suppose there is a parallel between the utter disruption to the lives of ordinary workers in the former Socialist countries of Eastern Europe, abetted by promises to many of those same workers that capitalism can’t fulfill, and the disruption to the lives of President Trump’s supporters, who have seen most of their working-class safety net vapourised in the name of markets. Although, I would say the USA is on the leading edge of seeing just what intolerances the population as a whole can tolerate and still maintain some sort of coherent social order. I would imagine capitalists of other countries are eagerly anticipating the outcomes but are thankful that the USA is leading this new and progressive economic experiment so that they don’t have to. “widening inequality and failed economic promises pave the way for reactionary politics.” … a feature, not a bug. 
…hear, hear…I would add that I suspect the oligarchy would just as soon be rid of those unable to enhance GDP growth…nothing but a burden upon “society” and in particular on government taxes…which need to be directed towards donors and enablers…. …simple to “follow the $$$$” (“tax reform”) to Trump donors…they don’t even bother to hide it at this point. The Trump phenomenon is not exactly new, either: “Nineteenth-century Germans showed how the Volk, or the people, became a sentimental refuge from the arduous experience of modernity; many sank deeper into resentment and hatred of the existing order while waiting for true national grandeur….[So, more recently] in the very places where secular modernity arose, with ideas that were then universally established–individualism (against the significance of social relations), the cult of efficiency and utility (against the ethic of honour), and the normalization of self-interest–the mythic Volk has reappeared as a spur to solidarity and action against real and imagined enemies. “But nationalism is, more than ever before, a mystification, if not a dangerous fraud with its promise of making a country ‘great again’ and its demonization of the ‘other’; it conceals the real conditions of existence, and the true origins of suffering, even as it seeks to replicate the comforting balm of transcendental ideals within a bleak earthly horizon.”– from Pankaj Mishra’s Age of Anger: A History of the Present. Mishra notes that Gandhi’s assassin belonged to Modi’s party, and Modi, Berlusconi, etc. are symptoms of this same disease. It’s about work and a paycheck. That is what capitalism is for us, a paycheck. Then what do you have but citizenship, when the paychecks run out? You’d know what to do with it. In Concentration Camps of the GPU. Fair to everybody means what? Trillions were given to Wall St. Workers were thrown from their homes. Cash used to buy homes, to maintain the chosen’s profits. “Hell to pay,” some people say. Maybe all will be lost. 
Good, clearly written, fact-based article.
https://www.nakedcapitalism.com/2017/12/eastern-europe-showed-how-neoliberalism-produces-reactionary-populism-like-trump.html
Do you ever dream of going back to school but don't have the time to do it? Well, here's your chance! At Capella University, there are so many online courses you can take from your computer at your own pace. It offers students searching for a recognized online degree program an opportunity to earn a degree through distance education. Capella University is an accredited, fully online university that offers graduate degree programs in business, information technology, education, human services, and psychology, and bachelor’s degree programs in business and information technology. Within those areas, Capella currently offers 82 graduate and undergraduate specializations and 16 certificate programs. The online university currently serves more than 17,900 students from all 50 states and 56 countries. It is committed to providing high-caliber academic excellence and pursuing balanced business growth. Capella University is a wholly-owned subsidiary of Capella Education Company, headquartered in Minneapolis. Capella University is accredited by The Higher Learning Commission and is a member of the North Central Association of Colleges and Schools (NCA), Ncahlc.org. Capella University, 225 South Sixth Street, Ninth Floor, Minneapolis, MN 55402, 1-888-CAPELLA (227-3552), Capella.edu. A lot of people are now using credit cards. With credit cards, you can buy stuff online, buy airline tickets, book hotel reservations, rent a car, or pay unexpected bills such as car repairs, especially if you don't have cash in hand. Shopping around for a credit card can save you money on interest and fees. You’ll want to find one with features that match your needs. Mint Credit Cards offer a card with an attractive introductory offer and have recently lowered their typical APR to new, lower rates. You can get the best credit card offer through credit-cards-mint.co.uk. Have you ever heard about Rebtel? Rebtel, which combined with PayPerPost makes “rebppp.” 
Rebtel Inc provides millions of mobile phone users with the ability to make free or low-priced international calls to more than 36 countries around the world, reaching over 1.3 billion people. It allows its customers to make international calls on the go, regardless of the model of their phone, without any special hardware or downloads. Signing up with Rebtel is very easy and absolutely free; there is no fee to sign up to use the service. Rebtel members are entitled to ten free international calls every month. If you are a Rebtel user, you do not need to sit anywhere near a PC or use a WiFi connection to make low-cost or free international calls.

Hurray! My new domain is finally up and running. Gosh, I almost gave up on it but dear hubby always encouraged me to give it another try. So I went ahead and searched Google for help. Luckily, I found a great site that has very detailed information on it, and he even included some screenshots of the GoDaddy CPanel.

A lot of people are now using credit cards. A credit card makes it easy to buy something now and pay for it later. It's much safer to use a credit card than to carry around cash. If you lose your credit card, you can ask your credit card company to cancel your card, and no one else can use it. Credit cards are also convenient. You can use them to make hotel, car rental and other reservations. You can buy items over the phone or online. You can also use credit cards for emergencies, like unexpected car repairs, when you don't have the cash to cover the expenses. Finally, using a credit card gives you a credit history, which helps you get home loans and other credit in the future. Using credit cards can help you build a positive credit history. This can enhance your ability to receive a private student loan, buy a car, rent an apartment, get a job, and eventually, try to buy a house.
CreditCardSearchEngine.com is one of the Internet's longest running sites for online credit card comparison. It allows consumers, businesses and students alike to search, compare, and apply for all types of credit card offers, everything from low interest and reward cards to cards for people with bad to average credit. CreditCardSearchEngine features offers from leading U.S. credit card issuers such as J.P. Morgan Chase, Bank of America, Citibank and leading brands Visa, MasterCard, American Express and Discover Card.

Okay, since dear hubby and our daughter love pasta, I'm planning to cook this for our dinner tonight. I just really want to cook something light like this since it's easy to digest. And above all, it only takes 30 minutes to cook.

1. Thaw shrimp, if frozen. Rinse shrimp; pat dry with paper towels. Meanwhile, cook linguine according to package directions.
2. Coat an unheated large nonstick skillet with nonstick cooking spray. Preheat over medium-high heat. Add chilli peppers and garlic to the hot skillet; cook and stir for 1 minute. Add shrimp; cook and stir about 3 minutes more or until shrimp are opaque. Stir in tomatoes, salt, and black pepper; heat through.
3. Drain linguine; toss with shrimp mixture. If desired, sprinkle with Parmesan cheese. Makes 4 servings.

Because chilli peppers contain volatile oils that can burn your skin and eyes, avoid direct contact with them as much as possible. When working with chilli peppers, wear plastic or rubber gloves. If your bare hands do touch the peppers, wash your hands and nails well with soap and warm water.

Okay, I got my new domain from godaddy.com and it was really cheap! I will just put the link here later once it's up and running. It will probably take another 24 hours before it goes live. In the meantime, I was so tired after cleaning our bathroom. Gosh, it's been like ages since the last time we cleaned it. I've been really so lazy this time. I mean, really lazy lol.

Looking for accounting software?
DSD Business Systems is there to help you find the accounting software that is right for you. You can rely on Accounting Software San Diego to manage your small business accounting and finances. DSD Business Systems is the leading provider of Sage Software's Sage MAS 90, Sage MAS 200, Sage MAS 500 ERP, Sage BusinessWorks, and Sage CRM SalesLogix in Southern California. Their custom-tailored solutions allow growth-oriented companies to make better, more informed business decisions in less time. DSD offers the best support available for Sage MAS 90 solutions and Sage MAS 90 & Sage MAS 200 enhancement products. As a Sage Software Master Developer, DSD also provides custom programming and a catalog of Sage MAS 90 enhancements, including Multi-Currency and Magnetic Media. With a professional approach and a dedication to integrity, DSD has assisted over a thousand companies in addressing their unique business software needs. DSD Business Systems provides a full range of consultation services for Sage MAS 90, Sage MAS 200, Sage MAS 500 and Sage CRM SalesLogix programs. Their team of professionals will guide your company towards the right accounting and business software solution.

It's another boring day! Dear hubby went to work today and we're stuck here. We even missed church today because he left the house at 6:00 in the morning. He'll be home by 2:00, so we might go to the park if the weather cooperates. Yesterday morning, I received my first offer from B2P. I submitted my blogs 3 months ago, but since my page rank was zero, they just kept them until I got PR2 and PR3. I thought my blogs had no chance anymore because they had no page rank, but thankfully they were accepted after all. So far, so good, so I just want to take advantage of it :) This year has been a blessed one. So many blessings coming our way both physically and financially.

You've been thinking about getting into precious metal investing?
There may never be a better time for buying silver bullion than right now. World demand for silver now exceeds annual production, and has every year since 1990. Above-ground stockpiles of silver bullion are low, shrinking rapidly and approaching zero. As an investment product, gold is available in coin or ingot form. Ingots are generally pure bullion cast in a convenient size and shape. Buying gold has been recognized for centuries as one of the best ways to preserve one's wealth and purchasing power. Gold bullion is a unique investment. Silver bars also represent an outstanding investment opportunity. So if you're into this kind of investment, I suggest you purchase your precious metals through Monex Deposit Company (MDC). This company has been doing this kind of business for over 30 years, with client transactions now totaling over $25 billion. Monex prides itself on having the best US silver coin prices and programs in the silver coin industry. With Monex, you can buy gold coins and other precious metals and have them delivered personally, or arrange for convenient and safe storage at an independent bank or depository. Monex Precious Metals is home to a large and dedicated staff of hard asset professionals committed to serving your precious metals investment needs and being America's best dealer, with a convenient market and competitive precious metals prices.

Looking for inexpensive but high quality venetian blinds? Terry's Fabric is an online marketplace for textile fabrics for curtains, upholstery, soft furnishings, cushions and blinds. They also stock curtain poles, curtain tracks and many more. The impressive range of discount fabrics available from the Terry's Fabric warehouse is of superior quality. Buy wholesale fabrics online at Terry's Fabric Warehouse, where you will discover thousands of quality products available to buy online at the best value-for-money prices.
They stock over 90% of the products they sell on their website for quick delivery. For more information, feel free to browse their website to see the full range of stock and see discount fabrics in a designer light.

I got this tag from Joyce. Thanks sis for thinking of me hehe.

Our dear daughter will go to school soon and we are looking for some ways to improve her reading skills. A lot of parents are talking about Score and how it helps their children experience progress. Score Learning centers help children ages 4 to 14 make significant academic progress in an innovative tutoring environment. So now, we are open to that option too. We are thinking that Score might be the easy way to go. We've heard a lot about how much Score can benefit students and their parents. Our dear daughter will possibly attend the innovative tutor program offered by Score.

It's almost lunch time and I have no idea what to cook. I'm too lazy to cook lol. Wow, I didn't have this habit before I got pregnant. It only started now. Hopefully it will be over soon. Okay, I'd better go and check our fridge for some leftovers so we can just eat those for lunch.

Most of us know that Orlando, Florida, best known as the home of Walt Disney World, also serves as the backdrop for high-end shopping areas, luxury resort hotels, cultural venues and other family-oriented activities. This popular region hosts six major theme parks, all close to a variety of Orlando hotels. If you and your family are taking a vacation to Florida and looking for Orlando Florida hotels, check out Orlando.com. They provide the most affordable prices on the widest selection of hotels, car rentals and flights throughout the greater Orlando Florida area. Orlando.com offers the most comprehensive travel and vacation arrangements in Orlando, with a travel guide offering tips on entertainment, dining, nightlife and more! They'll make sure you have the time of your life.
There's nothing much going on today, except that this morning, my dear hubby's lead boss called him because they want him to inspect some parts before they ship to their clients. Can you believe that? It was supposed to be his day off today, but he had no choice since he's the only one who can do the job. They don't have another inspector there. He has to work until 2:30, and then by 3:30 we are going to a dinner fellowship tonight with the elders at Outback Steakhouse. Meanwhile, dear daughter is now playing with her toys while I prepare my Milo. Then, I have to fold the laundry before I change my mind lol.

The really potent part of love is that it allows you to carry around beliefs about yourself that make you feel special, desirable, precious, innately good. Your lover couldn't have seen [these qualities] in you, even temporarily, if they weren't part of your essential being.

At first, I was like, what the heck is that? I had never heard such a word before. Then, a friend of mine told me that lap band surgery is a proven safe, viable weight loss alternative offering excellent weight loss results, and people do it all the time. It is another type of weight loss surgery. For those people who live in Florida, you might want to check out Tampa lap band surgery from Journey Lite. It is a fast growing network of specialized surgical facilities, highly skilled and experienced bariatric surgeons and a team of health care professionals dedicated to providing the safest and least invasive surgical weight-loss solution and the most comprehensive support programs available today. Journey Lite specializes in Laparoscopic Adjustable Gastric Banding, which is also known as LAGB or the LAP-BAND® System procedure. The LAGB or LAP-BAND procedure offers a number of advantages over other weight-loss surgery options. This procedure is the least invasive weight-loss surgery available. It is adjustable and, if the need should arise, completely reversible.
The LAP-BAND doesn't require cutting, stapling or rerouting of the stomach or intestines, and as such it carries lower surgical risks and fewer serious long-term complications. The procedure can be performed in an outpatient (day surgery) setting and offers a shortened recovery period. The highly skilled laparoscopic surgeons who practice at Journey Lite facilities are among the most experienced in the entire nation, and Journey Lite facilities are specially designed and equipped to meet the specific needs of the seriously overweight patient. Additionally, their highly competent staff is specially trained in caring for all the physical and emotional needs of the seriously overweight patient. Journey Lite commits itself to providing you, the patient, with the personalized care and attention that you deserve while providing you the safest and least invasive weight-loss surgery available, combined with a total follow-up care support system.

Well, my Thursday wasn't as bad as I thought it would be. I feel a little bit better physically. Some of you might already know that I'm pregnant. A few days ago, I didn't feel well because I always felt nauseous and all that. On top of it, I've lost 2.5 lbs because I didn't like the smell of certain foods. And I kept throwing up. But I'm glad my appetite is getting back to normal now, which is a good sign!

Okay, it's time for another mega kit from Rakscraps. They'll give out another amazing mega kit brought to you by Rakscraps and donated by the Elements Team! Make sure you read this month's newsletter to get to know their fantastic sponsors, snag some great freebies and get the buzz on everything going on this month! Don't forget to get your copies now!

Would you let years go by between visits to the dentist? Of course not! Your dental health is just as important as your general health. Dental care is very important. We make sure to visit our dentist every six months.
But for many people, going to the dentist is an expensive chore, especially for those who don't have dental insurance. When you do have dental insurance, it's pretty easy for most of us to visit our dentists. Hayfield Dental Care has been serving the residents of Alexandria and the surrounding areas since 1987. All of their dentists have received advanced or specialty training, which means they can perform almost any procedure without the need for a referral to another office. Charles Brown DDS has been employed by Hayfield Dental Care for over ten years. During that time Charles Brown DDS has performed literally thousands of crown, root canal and surgical procedures. Charles Brown DDS has a perfect record at the Virginia Board of Dentistry with a history of zero complaints. He graduated from the Medical College of Virginia and has received numerous awards. Charles Brown DDS was the recipient of the Academic Achievement Award for being ranked first in his class for the 1996 academic year. In 1997 Charles Brown DDS PC received the Quality Care Award and Resident of the Year award from UMMC. Charles Brown DDS is listed as one of the region's top dentists by the Washington Area Consumer Council and is a member of the ADA. Hayfield Dental Care's doors are always open to new patients and emergencies. Keep in mind, getting routine check-ups helps guard against problems. Not only will it keep you healthy, in the long run it will save you money too.

Okay, dear hubby just bought Chinese food for our dinner. I absolutely loved their pancit canton, fried prawns and the sweet and sour soup. Gosh, I can't believe I ate so much. After eating, I went directly to the couch and took a break lol. My stomach was hurting! I didn't even notice that I had fallen asleep. Dear hubby didn't wake me up until 8 pm, and then he sent me off to our bedroom. He took care of Lilly while I was asleep. He's a real gentleman!
Gosh, I was hoping to have nice weather today so that dear daughter and I could go to the park, but I didn't see the sun the whole morning. I can't believe this. We're in mid-May now and it still doesn't feel like summer. It's really so annoying. But oh well, what can I do?

Have you guys heard about GoFish.com? GoFish was created to help people put their videos in front of the world, as well as to help those with a little time on their hands find the best videos to watch. Hundreds of thousands of people are creating amazing videos of all kinds, from the casual to the carefully constructed: documentaries, comedies, spoofs, pranks, even episodic dramas and many more. GoFish was created to give you a place to show off your skills while inspiring you to keep on creating. GoFish has a stunning selection of video clips for your amusement and viewing pleasure. The site makes finding, watching, and uploading videos as easy as you can imagine. GoFish.com now has a contest running where you could win a date with a celebrity. Seduce A Celeb will run on GoFish.com over the next 14 weeks. You can check out the free videos at GoFish.com for more details. The lucky winner can go on a date with gorgeous Mirelly Taylor. She has appeared in many movies, including Kiss Me Again and Serving Sara. She has even made appearances on hit television shows such as Las Vegas, Punk'd, and Numb3rs. So what are you waiting for? Who knows, this might be your chance to win a date with Mirelly Taylor.

Oh my gosh, where's the sun? We've been looking forward to having nice weather this week as we are going to Vancouver, Canada for a 2-day vacation starting on Friday, May 18th. Gosh, the weather here in Washington is really weird.

Do you want to be on top of the major search engines? Search engine positioning is very important because it can increase web traffic by a tremendous amount. Customer Magnetism is there to help you out.
They offer search engine positioning services, proven search engine optimization services to improve your website's ranking. Customer Magnetism is an internet marketing firm that generates top rankings. At Customer Magnetism, discover all your search engine ranking positions so you can better align your site for incredible search engine placement. Many of their clients have reported that their services generated a better return on investment than all of their other conventional forms of marketing such as direct mail, print ads and trade shows. The demand for high-speed internet access, both at the office and at home, continues to grow. There are currently over 200 million internet users within the U.S. alone and over one billion internet users worldwide. Customer Magnetism's services include extensive key phrase research, SEO copywriting, key phrase analysis, competitive back-link analysis, internal link structure and navigational improvement, strategic title meta tag adjustments, and on-page optimization changes. They provide a variety of link building services, such as submissions to directories that are known to generate direct click-through traffic and valid back links, article writing and submissions, and press release writing and submissions. They also provide monthly reports (available online with unique password protection) to monitor your progress by tracking your rankings within the major search engines, monitoring your traffic, analyzing what search terms you are being found for, tracking your monthly Alexa ratings and tracking your current back link counts within Google, Yahoo and MSN. Once your initial term has ended, and at no obligation, they offer renewal plans at a fraction of the initial price in an effort to continue to monitor and maintain your new rankings. They are there to be your long-term partner for success.

I just woke up from a short nap and here I am again sitting in front of my computer. I'm trying to get a birthday gift for our godchild.
I was browsing the eBay website but I can't make up my mind on what to get. She will turn 1 year old this coming May 23rd. Any ideas or suggestions?

1. In a medium saucepan, heat oil over medium heat. Sauté ginger and garlic until fragrant. Add onions; stir-fry until softened and translucent.
2. Add chicken cuts. Cook for 3 to 5 minutes until chicken colors slightly. Season with patis and salt.
3. Pour in water (or rice water, if using). Bring to a boil. Lower the heat and let it simmer until chicken is half-done. Add in chayote (or papaya or potatoes). Continue simmering until chicken and vegetable are tender. Correct seasonings and then add sili leaves or malunggay or a substitute. Stir to combine until well blended. Remove from heat.
4. Let stand for a few minutes to cook the green vegetables. Transfer to a serving dish and serve hot.

The earliest celebration honoring mothers dates back to the annual spring festival of ancient Greece dedicated to Rhea, the Mother of the Gods. The Greeks would pay tribute with honey-cakes, fine drinks and flowers at dawn. It is a day when we take some time to thank the person who brought us into this world and cared for us when we were young, and even now that we are grown up. What better way to show your mother you really care? At GourmetGiftBaskets, they know you care, and that's why you're searching for the perfect Mother's Day gifts. Gourmet gift baskets will give her memories that last until next Mother's Day! The best Mother's Day gifts you can give are their gourmet chocolates, fruits, wines, breads, cheeses and more. The beauty of their Mother's Day gift ideas is that you can customize your Mother's Day baskets with the foods you know she loves.

"Love comes to those who still hope even though they've been disappointed, to those who still believe even though they've been betrayed, to those for whom love still heals, even though they've been hurt before."

We went to church at 10:45 in the morning and came out at 12:30.
So we headed to a restaurant and had our lunch there. It was so busy that we had to wait for at least 15 minutes. We were all starving lol. We didn't eat anything before we left the house. We decided to go home after lunch because dear hubby wanted to cut the grass. It's like 10 inches long now. So we went home. When we arrived, he told me to go directly to our bedroom. So I went there and guess what? I saw this beautiful ring box in my drawer. He's so sweet. I didn't expect anything from him because he already bought me a gift two weeks ago. Okay, that's what I got this Mother's Day. Five round simulated diamonds are set in a resplendent row on an anniversary band of 14K yellow gold. The total simulated diamond weight is approximately 50 points. I absolutely loved it! To all mothers out there, happy Mother's Day! Enjoy the rest of the day!

Are you looking for inexpensive marine electronic products? Well, look no further. At Northeast Marine Electronics you will find electronics for camping, hiking, hunting, fishing, boating, and driving. Take a look at their extensive collection of Standard Horizon electronics when you get a moment, or their high-tech GPS chart plotters and Garmin marine electronics. They also have a large selection of electronic fishing tools like their Furuno fish finders. Let the enthusiastic and knowledgeable staff help you choose the discount marine electronics you need. Their selection of discount marine electronics is the best you will ever find. They carry anything you could ever need at sea, like fishfinders, batteries, GPS systems, radar, chart plotters, depth finders, binoculars, compasses, and instructional videos. All of their consumer marine electronics are sold at incomparable discounts from retail prices. They only sell materials of the absolute highest quality. Their site is easy to navigate and includes many necessary products.
If you ever encounter any difficulty using their website, or if you ever have any difficulties with a product, call and let them know. At NortheastMarineElectronics.com, their consumer marine electronics are designed to give you peace of mind when you sail away from the shore. They only sell supplies that they know to be reliable. Their brands are some of the most trusted in the industry. They have products from Astron, Garmin, Standard Horizon, Raymarine, and equipment from many more manufacturers. They want everyone to take pleasure in fishing. Whether you make your living as an angler or fishing is your weekend calling, you will benefit from using quality materials. The discount marine electronics available at NortheastMarineElectronics.com are always astonishing. Whether you need radar, fishfinders, or GPS systems, you'll find the perfect products at their store.

1. Put bananas, milk and vanilla ice cream in the blender and blend till absolutely smooth.
2. Pour the banana shake into glasses and garnish with the cut bananas.

I called my mother last night to greet her a happy Mother's Day. I miss her so much. It's been a while since I saw her. Hopefully she can come over here before the year ends. Mom, you are thousands of miles away, but you know how to make things bright. It's as if you were sent to me to help me through rough days and nights. You gave me your undying love, and I love you too. I will try to always be there, as I know you will always be there for me. We can confide in each other and make ourselves happy. Happy Mother's Day!

One of the easiest and least expensive ways to guard a home network from attack is to set up personal firewall software. Personal firewall software provides you with security against hackers who try to access your PC when you are connected to the Internet. Software Security Solutions is a single resource for best-in-class computer security software.
They advocate a layered security solution using different Internet and computer security software products. Different types of security software excel at different types of protection, and since there is no single product that protects against everything, they take a layered approach. Through research and testing they have found the best security software in each threat category, including anti-virus, anti-spyware, exploits, and firewalls, and put them all in one place. Customers find that this comprehensive approach to Internet security saves them time and money. Essentially, Software Security Solutions has become their virtual security consultant, providing "self-service" computer security. A comprehensive product list and convenience, coupled with outstanding customer support, have taken the mystery out of computer security for consumers and small to mid-sized businesses.

Well, I started downloading this Korean drama last night and it's almost done. It's so funny, because when dear hubby called, he said the phone sounded funny. I'm not sure how the downloading affects the phone connection lol. But I'm so glad it's almost done. I can't wait to watch it. It looks interesting! Oh, by the way, the title of the drama is "Goong" or Princess Hours, just in case you might want to see it for yourself. Most of my friends recommend this drama because they say it's good.

I watched a Tagalog movie this afternoon entitled "Pangako Ikaw Lang" starring Aga Muhlach and Regine Velasquez. Well, the movie was made back in 2001 but I only had the time to watch it online today lol. The story is simple, but Regine and Aga are really good in this movie. They bring out their characters and they have fantastic chemistry on screen. If you like chick flicks, this movie is for you. Vince (Aga Muhlach) is a womanizer while Cristina (Regine Velasquez), on the other hand, is a doting daughter. Vince gets into a serious car accident that sends him to the hospital, where Cristina happens to be visiting her father who has lung cancer.
Cristina goes into Vince's room and professes her love to this stranger when she unknowingly unplugs Vince's heart rate monitor. She runs away frightened, until she randomly sees him again at a shopping mall. There they connect with each other and sparks fly. After a few more chance meetings, Cristina and Vince fall head over heels for each other, with one problem: Vince's girlfriend Crystelle. Eventually, Vince realizes that Cristina is the girl he wants to be with and, to be cliché, they live happily ever after.

With VoIP, you can make and receive phone calls via the Internet and save tons of money on long distance charges. One of the best benefits of a VoIP system is that calls placed over a private network are free. This is a vitally important consideration if your business has more than one office. With a private network connection between offices, it will cost you nothing to stay in touch on a near-constant basis. If you have been looking for a VoIP small business phone system at an affordable price, look no further. Xpander Communications specializes in small business phone systems for companies looking to improve and explore new technologies such as Voice over IP (VoIP). Xpander delivers the most user-friendly phone systems available, thanks to a focus on simplicity, reduced costs, and drastically reduced maintenance. Small businesses with office branches scattered across the country can share the same phone system and the same private list of extensions across all branch offices equally. You can enjoy free 4-digit dialing between all branch, satellite, or home office locations, making your team tighter and more efficient. Costs are drastically reduced thanks to unlimited free service and support, as well as improved long distance and international calling plans. The benefits VoIP phone systems have to offer the small business world are endless.

My designer Bannerwoman has a new kit called "Tee It Up". The kit is on sale now at Scrapdish.com.
So don't forget to get your copy now! This kit is perfect for the avid golfer in your life, or even the amateur on the mini-golf circuit! There are LOADS of fun elements in this kit! It's perfect for the male gender as well, even without the golf-themed elements! You'll love how everything mixes and matches! Adorn your photos with this fun kit, and show "Hubbie" that you do care about HIS game! The kit includes 8 papers, 1 brad, 1 buckle, 2 buttons, 1 cart sticker, 2 corners, 1 divot marker, 1 dotted trail, 1 flag, 1 frame, a golfball, a scorecard, a club, a hat, 2 labels, 1 name plate, 2 paperclips, 3 ribbons, 1 staple, 2 worded stickers, 1 golf tag, 1 diamond tag, 2 tees, and 1 photo turn.

1. Wash cabbage well and drain. Sauté the garlic, onion, pork, and tomatoes in shortening.
2. Add shrimp juice. Stir until it boils. Add the sliced cabbage and season to taste. Cook until cabbage is crisp-tender.

Looking for high quality and affordable vacation homes in Orlando? If you are looking for a great place to spend Christmas, Spring Break, or even your summer break, I would suggest you check out Orlando Vacation. It is your one-stop shop to plan your next Orlando vacation. You can compare rates on Orlando hotels and vacation packages near Disney World Orlando, or if you need more space, browse through vacation home rentals. There are beaches within a short 40-minute drive, and outstanding resorts and vacation homes to stay in for less than most other popular destinations. They have it all in Orlando. There are more top-rated golf courses within a 40-minute drive than anywhere else in Florida. Orlando also has the second largest convention center in the world, and since Disney World is its biggest draw, almost every recreation in Orlando is family friendly. With the ever-competitive theme park business, almost all the area theme parks are continuously adding new rides and new attractions, as there is stiff competition among them.
These constant upgrades to the area attractions will continue to make Orlando a great destination for the entire family.

Dear hubby went to a Filipino store last Sunday and he bought ampalayas (bitter melons) and pork hamonado longanisa. So I made them today for our lunch. Very yummy! I can't believe I ate the whole thing. I'm like a pig now...lol.

Hurray! I won a $10 GC from Nyree. She did a raffle for her birthday. I'll pray that she will have more birthdays to come and more blessings coming her way! Again, belated happy birthday sis Nyree!

One of my biggest dreams is to go to Florida with my family and be able to see Universal Studios and other beautiful places there. For more than 85 years, Universal Studios has been bringing unique entertainment experiences to millions of people around the world. When traveling to Orlando, most people also visit Universal Studios and Islands of Adventure. For night-time fun, dinner shows in Orlando are very popular. If you and your family are planning to go to Universal Studios, then you might want to check out Orlandofuntickets.com and get your Universal Studios tickets. It is the ultimate source for discount tickets for Disney World and all Orlando theme parks, dinner shows and attractions. When planning your vacation to Orlando, one of the most expensive and intimidating factors in your planning process is the endless array of admission ticket options. Disney tickets are now available for many different days and options, from one to ten days, in what is called "Disney Magic Your Way". OrlandoFunTickets.com has the lowest prices on all discount Disney tickets and guarantees the lowest prices. Walt Disney World has so much to offer with four theme parks, two water parks and much, much more. They have discounted tickets to everything Disney has to offer, including DisneyQuest, Downtown Disney Pleasure Island, and Disney's Typhoon Lagoon and Blizzard Beach water parks, as well as every Disney theme park ticket that is available.
OrlandoFunTickets.com also has the lowest discount prices for all Orlando dinner shows, including Arabian Nights, Medieval Times, and Pirates Dinner Adventure discount tickets, as well as for all of the other major theme parks, including Sea World tickets, Kennedy Space Center tickets, Universal Studios tickets, and more.

I'm craving something bitter today, so I decided to cook Pinakbet. Oh my, it's really yummy!
1. Cut the ampalaya (bitter melon) and the eggplant into 4 pieces each. Place in a saucepan, then add the okra, tomatoes, onion, and ginger.
2. Add the soy sauce. Bring to a boil.
3. Add 2 tablespoons oil, then salt and vetsin (MSG) to season, or you can use bouillon instead. Cook until the vegetables are tender but not overcooked.

To those of you who tagged me, I've already done the tag and it's on one of my other blogs. Sorry it took a while; I've been a bit busy these past few days!

Looking for high-quality designer dog beds? At Bling Bling Puppy, they have a huge selection at low prices! They carry the perfect pet supplies to keep your pets healthy and energetic. Bling Bling Puppy is a luxury dog boutique that specializes in turning a plain puppy into a diva dog. The days are long gone when the best you could find to pamper your dog was a square, boring pillow and a plain collar and leash. Now the possibilities are endless. At Bling Bling Puppy you can find the finest in designer dog beds, ranging from chaise lounges to daybeds and much more. These can make a great match to your existing home décor. They have chic leather dog carriers and crystal dog collars made from the finest quality materials, as well as other exquisite varieties to choose from. They have a fine assortment of grooming and spa products, custom dog houses, all-natural dog supplements, exquisite dog jewelry, and toys. They also have an extensive list of dog names. You can also find tips and advice, from health tips on proper supplementation to getting the right bed for your dog.
They have all the right accessories to add a touch of bling to your dog and make them the envy of the entire neighborhood.

We are happy to announce and welcome our new addition to the family! As some of you might know, we've been trying to get pregnant for 7 months, and all our hopes and dreams finally came true! Last week, I didn't like the smell of garlic, and every time I smelled it, I felt nauseous. It went on for a week, but we didn't expect anything; I just thought it might be because of my sinusitis. Dear hubby was kind of suspicious, though, because I still hadn't gotten my monthly period. I was supposed to have it on March 27, but it still hadn't come as of today. So he bought a pregnancy test today and, you guessed it, I'm pregnant with our second baby! I took the pregnancy test later today and it was positive! I'm so, so happy! Next week, we are going to see the doctor to confirm the news!

Posted by Nita | Permalink | 13 Comments

My other designer Rebecca Lynn has a new kit called "Jackie's Little Star" available at Plain Digital Wrapper. Don't forget to get your copy now! This magical kit is perfect for scrapping the little star in your life. Brightly colored papers and coordinating elements include 8 papers, 3 tied ribbons, 3 grosgrain ribbons, 1 star sticker frame, 1 star sticker border, 3 word sticker sayings, 1 stick pin, 1 piece of paper, 1 tag, 1 staple, 2 brads, 2 star buttons, 1 shooting star, 1 embroidered patch, 1 swirly doodle, and 1 paperclip.

Does your company need a helping hand to fund your business? If you are searching for a reliable company to obtain funding for your business, you've come to the right place. Venture Alliance Partners (Venpar) is one of the leading providers of private equity, dedicated to helping entrepreneurs and investors build world-class companies.
Venpar.com was founded in 2006 and has already managed to show excellent results for its clients, which is, not surprisingly, the core reason behind its list of partners and strong portfolio. Their continued focus on the venture capital marketplace, and their strong, trustworthy determination to act as an external financial strategist, headhunter, investment banker and corporate therapist for a wide range of growing companies, remain key to their success.

Mother's Day is your once-a-year opportunity to express your appreciation for everything she's done in your life. If you are looking for the perfect Mother's Day gift for your wife or mother, then look no further! At FlowerShop.com, they are dedicated to providing their customers with fresh flowers and unique gifts to express their thoughts and feelings. You can choose from a huge assortment of Mother's Day flowers, luxury gifts such as robes and slippers, or gourmet treats and gift baskets. Most of their products are hand-delivered by local florists and can be delivered same-day. They are experts in gift baskets, gourmet baskets, corporate gifts, plants, flowers and all the items that you would find in a flower shop. You'll also find unique gifts that wouldn't typically be sold by florists, such as chocolate-covered berries, melt-in-your-mouth brownies, and gourmet nuts and dried fruits. They are an FTD and Teleflora member florist, so you can count on them to provide you with a 100% satisfaction guarantee. If you're not sure what to send, you can find information about flowers and gift-giving at the FlowerShop.com blog. Flowershop.com is a secure site; they've taken all the necessary steps to create a secure environment for credit card transactions. Eighty-three percent of people like to receive flowers unexpectedly. Shop FlowerShop.com today and make someone smile! Flowershop.com is a family-owned floral company with almost 35 years of experience in the flower industry.
Last week, we went to a Korean store and bought pork legs there. So tonight, I'll make Crispy Pata. Man, I love Crispy Pata so much!
1. Place the pata in a casserole and cover with water. Add the whole garlic, onion, peppercorns and bay leaf. Season with plenty of salt. Set over high heat and bring to a boil, skimming off scum as it rises. Lower the heat, cover and simmer for an hour to an hour and a half, or until tender. Alternatively, pressure-cook for 30 to 45 minutes from the time the valve starts to turn.
2. Remove the pata from the broth, draining well. Cool. If you have the time, wrap in foil or cling film and place in the freezer for thirty minutes.
3. Heat the cooking oil in a wok or deep fryer until it starts to smoke. Gently lower the pata into the hot oil. The oil will spatter, no doubt about that, so it is best to immediately cover the wok or fryer. Make sure that the cover has a steam valve to allow the hot steam to escape and to prevent it from condensing back into the oil. Cook the pata until the rind is puffed and golden.

I don't feel like doing anything today. I just want to sit in front of my computer and check my emails. I'm already done with my assignments. Hurray!

Ever dream of traveling to Aruba? Aruba is a delightful slice of paradise in the heart of the Dutch Caribbean. It is noted for its miles of soft, white sand beaches lined with palm trees and resorts. Aruba offers a wide variety of water sports, duty-free shopping, exquisite dining, exciting casinos and nightlife. Relax on pristine white beaches that rank among the most beautiful in the world. If you are planning a vacation to Aruba, plan your trip and book your next Aruba vacation at Vacations.net. Vacations.net, the leader in the all-inclusive travel experience, has launched a newly redesigned website with a customized booking engine and intuitive functionality that set it above its competitors.
Vacations.net has created a website that raises the bar and sets a new standard for hospitality websites catering to the high end, while offering great rates, tremendous savings of up to 50% off regular rates, and more! Savings on all-inclusive resort vacations begin at Vacations.net. Dramatic images, along with detailed information on customs, culture, history and activities, put the customer right in their destination. The customized booking engine is intuitive, the content is personable as well as thorough, the images are compelling and, most importantly, the site is easy to use. The booking engine functionality rivals that of global online travel sites, with the ability to price dynamically, offer dollar and percentage-off promotions, discount regular rates, save with free-nights programs, and provide value-added packages. Vacations.net aims to provide an all-inclusive online experience to travelers seeking the perfect all-inclusive resort. It features all-inclusive resorts in popular beach and sun destinations, including the Dominican Republic, Jamaica, and Mexico. Savvy travelers seeking to discover a new all-inclusive resort or to find a great rate on a long-time favorite vacation destination do not need to look any further than Vacations.net, where paradise is just a click away.

I went to the eBay website and found some beautiful beaded jewelry, and I'm itching to get it. But I have to ask my dear hubby's permission first. I'm a good wife, yah know! I love beads! They're so pretty.
Diabetic heart disease is a distinct clinical entity that can progress to heart failure and sudden death. However, the mechanisms responsible for the alterations in excitation-contraction coupling leading to cardiac dysfunction during diabetes are not well known. Hyperglycemia, the hallmark of diabetes, leads to the formation of advanced glycation end products (AGEs) on long-lived proteins, including sarcoplasmic reticulum (SR) Ca2+ regulatory proteins. However, their pathogenic role in SR Ca2+ handling in cardiac myocytes is unknown. Therefore, we investigated whether an AGE cross-link breaker could prevent the alterations in SR Ca2+ cycling that lead to in vivo cardiac dysfunction during diabetes. Streptozotocin-induced diabetic rats were treated with alagebrium chloride (ALT-711) for 8 weeks and compared to age-matched placebo-treated diabetic rats and healthy rats. Cardiac function was assessed by echocardiographic examination. Ventricular myocytes were isolated to assess SR Ca2+ cycling by confocal imaging and quantitative Western blots. Diabetes resulted in in vivo cardiac dysfunction, and ALT-711 therapy partially alleviated diastolic dysfunction by decreasing isovolumic relaxation time and myocardial performance index (MPI) (by 27 and 41% vs. untreated diabetic rats, respectively, P < 0.05). In cardiac myocytes, diabetes prolonged cytosolic Ca2+ transient clearance by 43% and decreased SR Ca2+ load by 25% (P < 0.05); these parameters were partially improved after ALT-711 therapy. SERCA2a and RyR2 protein expression was significantly decreased in the myocardium of untreated diabetic rats (by 64 and 36% vs. controls, respectively, P < 0.05), but preserved in the treated diabetic group compared to controls.
Collectively, our results suggest that, in a model of type 1 diabetes, AGE accumulation primarily impairs SR Ca2+ reuptake in cardiac myocytes and that long-term treatment with an AGE cross-link breaker partially normalized SR Ca2+ handling and improved diabetic cardiomyopathy. Diabetes has become an epidemic disease, and it is estimated that by the year 2025 it will affect over 300 million people worldwide (Amos et al., 1997; Boudina and Abel, 2007). In the United States alone, about 8% of the population is affected by diabetes, and approximately one million of those people suffer from insulin-dependent (type 1) diabetes. Type 1 diabetes is characterized by sustained hyperglycemia resulting from the loss of insulin-producing pancreatic beta cells. This loss in insulin production results in dysfunctional glucose uptake in insulin-sensitive tissues (e.g., striated muscle) and causes multiple-organ complications. Of importance, diabetes is also a common cause of cardiovascular disease. Within the past 30 years, diabetic cardiomyopathy has been identified as a clinical entity in its own right, independent of coronary artery disease and atherosclerosis (Fang et al., 2004; Poornima et al., 2006). Ventricular diastolic dysfunction is the first stage of diabetic cardiomyopathy and has been reported in about 50% of asymptomatic patients (Fang et al., 2004; Lacombe et al., 2007). Because intracellular calcium (Ca2+) homeostasis is crucial for excitation-contraction coupling, chronic diabetes mellitus has been associated with impaired cardiac contractility and relaxation of the myocardium due to altered Ca2+ homeostasis (Lagadic-Gossmann et al., 1996; Pierce and Russell, 1997; Netticadan et al., 2001; Zhong et al., 2001; Choi et al., 2002; Fang et al., 2004; Lacombe et al., 2007). However, the exact mechanisms of this impaired Ca2+ homeostasis and the specific therapeutic strategies for this patient population remain elusive.
The sarcoplasmic reticulum (SR) functions as the main regulator of intracellular Ca2+ and is a major determinant of cardiac contraction and relaxation (Bers, 2002). Ca2+ entry through the L-type Ca2+ channel activates Ca2+ release from the SR. The SR Ca2+ release channels, the ryanodine receptors (RyRs), release the majority of free Ca2+ necessary for contraction, and SR Ca2+ ATPase (SERCA2a) pumps sequester the majority of Ca2+ during relaxation of cardiac myocytes. Several groups (including ours) have reported decreased expression of these SR Ca2+ regulatory proteins in type 1 diabetic rats with cardiac dysfunction (Poornima et al., 2006; Lacombe et al., 2007; Ratnadeep et al., 2009). Furthermore, impaired excitation-contraction coupling in diabetic myocytes has been characterized by slower Ca2+ transient decays and cytosolic Ca2+ overload during the diastolic phase (Pierce and Russell, 1997; Choi et al., 2002; Lacombe et al., 2007). However, the mechanisms by which SR Ca2+ cycling is impaired during diabetic cardiomyopathy have not been fully elucidated. Chronic hyperglycemia, the hallmark of diabetes, accelerates the reaction between glucose and proteins and leads to the formation of advanced glycation end products (AGEs). These AGEs form irreversible cross-links throughout the lifetime of many large proteins (such as collagen and hemoglobin), covalently modifying their structure and function (Cooper, 2004). Therefore, AGEs induce myocardial fibrosis and stiffness leading to severe cardiac dysfunction (Norton et al., 1996; Asif et al., 2000; Vaitkevicius et al., 2001; Aronson, 2003; Candido et al., 2003; Bakris et al., 2004; Cooper, 2004; Hartog et al., 2007; Ma et al., 2009). In addition, Bidasee et al. (2003, 2004) have demonstrated the presence of cross-linked AGEs on long-lived intracellular cardiac SR proteins such as the SERCA2a pump and RyR2 after a few weeks of diabetes. 
Therefore, one could hypothesize that the post-translational modification of SR proteins by AGEs could lead to an alteration in Ca2+ homeostasis. However, the functional significance of AGEs on SR Ca2+ regulatory proteins in cardiac myocytes, and thus on excitation-contraction coupling, has not been determined. Our hypothesis was that treatment with an antiglycation therapeutic agent, dimethyl-3-phenacylthiazolium chloride (alagebrium chloride or ALT-711), which chemically breaks AGE cross-links, would normalize SR Ca2+ reuptake in cardiac myocytes and therefore improve diastolic function in type 1 diabetes. Eight-week-old male Wistar rats were randomly divided into 3 groups (n = 11/group): an untreated age-matched control group (CON); an untreated diabetic group (DX); and an ALT-711 (Shanghai Inc., China) treated diabetic group (DX-ALT). Diabetes was induced at 10 weeks of age in the DX and DX-ALT groups by a single injection of streptozotocin (STZ, 50 mg/kg IP diluted in 1 mL citrate buffer). The control group received a similar volume of vehicle. The DX-ALT group received dimethyl-3-phenacylthiazolium chloride (ALT-711, 10 mg/kg per day in the drinking water) for 8 weeks. This therapeutic dose has previously been shown to significantly reduce cardiac AGE levels in STZ-induced diabetes (Candido et al., 2003). The volume of ALT-711 delivered in the drinking water was calculated based on individual water consumption, which was measured every other day. To confirm the status of diabetes, blood samples were drawn from the tail vein for measurement of blood glucose concentration using a glucometer (BD Logic) at baseline and then weekly after STZ injection for 8 weeks. Animals were weighed once a week as a means to monitor their clinical condition. This animal protocol was approved by the Ohio State University Institutional Animal Care and Use Committee.
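The drinking-water dosing described above (a fixed mg/kg daily dose delivered at a concentration scaled to each animal's measured water consumption) reduces to simple arithmetic. The sketch below is illustrative only; the function name and the example weight and intake values are assumptions, not data from the study.

```python
def alt711_water_concentration(body_weight_kg: float,
                               daily_water_intake_ml: float,
                               dose_mg_per_kg: float = 10.0) -> float:
    """Drug concentration (mg/mL) needed in the drinking water so that an
    animal drinking `daily_water_intake_ml` per day receives the target
    dose of `dose_mg_per_kg` per day."""
    daily_dose_mg = dose_mg_per_kg * body_weight_kg
    return daily_dose_mg / daily_water_intake_ml

# A hypothetical 0.35 kg rat drinking 40 mL/day needs 3.5 mg/day,
# i.e., 0.0875 mg/mL in the water bottle.
concentration = alt711_water_concentration(0.35, 40.0)
```

Because intake was re-measured every other day, the concentration would be recomputed on the same schedule to keep the delivered dose close to 10 mg/kg per day.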
Transthoracic echocardiographic examination was performed to assess systolic and diastolic function at baseline and 8 weeks after the induction of diabetes. Two-dimensional, M-mode, and pulsed-wave Doppler images were obtained in rats lightly anesthetized with isoflurane (minimal effective concentration) and placed on a heating table to maintain normothermia. Examinations were done using a high-resolution, high-frequency digital imaging system with a 21 MHz linear-array transducer and simultaneous ECG recording (Vevo 2100, VisualSonics, Toronto, Canada), following standard techniques as previously described (Dirksen et al., 2007; Lacombe et al., 2007, 2010; Ware et al., 2011). Standard parasternal long- and short-axis views (6–8/rat) were obtained during each echocardiographic examination. Ventricular structure and function were assessed by two-dimensional cine loops of a long-axis view (with frame rates of at least 200 frames/s) and of a short-axis view at the mid-level of the papillary muscles, as well as M-mode loops of the short-axis view. Thicknesses of the interventricular septum and of the left ventricular posterior wall, and the left ventricular internal diameter (LVID), were measured in systole and diastole from the short-axis view according to standard procedures. Left ventricular (LV) ejection fraction (EF), a surrogate of systolic function, was calculated as follows: EF = [(LVID end-diastole − LVID end-systole)/LVID end-diastole] × 100%. The apical four-chamber view was used for color-flow-guided, pulsed-wave Doppler imaging of transmitral flow and LV outflow. The myocardial performance index (MPI or Tei index) was obtained from the sum of the LV isovolumic relaxation time and isovolumic contraction time divided by the aortic ejection time, parameters which were measured from the pulsed-wave Doppler imaging of transmitral flow and LV outflow. Echocardiographic image measurements were performed offline.
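The two indices defined above translate directly into code. This is a minimal sketch of the stated formulas; the function names and the example measurement values are illustrative, not data from the study.

```python
def ejection_fraction(lvid_diastole: float, lvid_systole: float) -> float:
    """EF (%) from LV internal diameters, per the formula in the text:
    EF = (LVIDd - LVIDs) / LVIDd * 100."""
    return (lvid_diastole - lvid_systole) / lvid_diastole * 100.0

def myocardial_performance_index(ivrt: float, ivct: float,
                                 ejection_time: float) -> float:
    """MPI (Tei index): sum of isovolumic relaxation and contraction
    times divided by the ejection time (all in the same time unit)."""
    return (ivrt + ivct) / ejection_time

# Illustrative values only (diameters in mm, times in ms):
ef = ejection_fraction(7.0, 3.5)                      # -> 50.0 %
mpi = myocardial_performance_index(25.0, 15.0, 80.0)  # -> 0.5
```

Note that MPI is dimensionless (a ratio of times), which is one reason it is relatively independent of heart rate and loading conditions.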
All image acquisitions and offline measurements were conducted by the same investigator (AK). Average values were obtained from the measurement of three cardiac cycles from one cine loop. LV fibrosis was measured at 8 weeks after the induction of diabetes by the Ohio State University's Core Pathology laboratory. LV cross sections were washed with PBS, embedded in OCT (optimal cutting temperature) compound, frozen on dry ice and stained with Masson's trichrome stain. Following echocardiographic measurements, animals were euthanized with pentobarbital sodium. The heart was removed and perfused in a retrograde manner, using a Langendorff apparatus, with Tyrode buffer (37°C, pH = 7.35, oxygenated with 95% O2 and 5% CO2), which contained (in mM): NaCl (135), KCl (5.4), MgCl2 (1), NaH2PO4 (0.33), Hepes (10), glucose (10), and CaCl2 (1). This initial perfusion was followed by a perfusion with Tyrode buffer without any CaCl2. Subsequently, collagenase (type II, Worthington Biochemical, 1 mg/ml) was added to the calcium-free Tyrode buffer and recirculated for the rest of the perfusion period. When the heart was soft, the ventricles were minced and the cells were washed in Tyrode solution containing CaCl2 (1 mM). Only rod-shaped cells with sharp margins and clear striations were included in the study. All recordings were made within 5 h of isolation (Dirksen et al., 2007; Lacombe et al., 2007, 2010). Ca2+ transients were measured in fluo-3-loaded cardiac myocytes with confocal Ca2+ imaging as previously described; for measurements of Ca2+ transients and transient decay, the mean area under the curve was calculated (Kubalova et al., 2005; Dirksen et al., 2007; Lacombe et al., 2007, 2010). Rapid applications of caffeine (10 mM) were used to measure SR Ca2+ content by measuring the peak amplitude of the caffeine-induced Ca2+ transients.
Intracellular Ca2+ imaging was performed using a laser scanning confocal system (Olympus Fluoview 1000 confocal microscope interfaced to an IX-70 inverted microscope and equipped with a 60×, 1.4-NA oil objective). Fluo-3 was excited by the 488-nm beam of an argon-ion laser, and fluorescence was acquired at wavelengths > 515 nm in line scan mode, at a rate of 2 or 6 ms per scan. The magnitude of the fluorescent signals was quantified in terms of F/F0, where F0 is the baseline fluorescence (Kubalova et al., 2005; Dirksen et al., 2007; Lacombe et al., 2007, 2010). LV myocardium was collected 8 weeks after the induction of diabetes. Crude membrane homogenates were prepared for Western blot analysis, as previously described (Meurs et al., 2006; Lacombe et al., 2007; Ware et al., 2011). Proteins were subjected to sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) and electrophoretically transferred to PVDF membranes using a trans-blot cell (Bio-Rad Laboratories, Hercules, CA, USA; Meurs et al., 2006; Lacombe et al., 2007; Ware et al., 2011). Samples from the 3 groups were loaded on the same gel to ensure equal blotting conditions for each group. Membrane proteins were incubated with mouse RyR2 or SERCA2a antibodies (1:3000 and 1:1000 dilution, respectively, Affinity Bioreagents), and subsequently with the appropriate secondary antibodies conjugated to horseradish peroxidase (1:50,000 dilution, Jackson ImmunoResearch Laboratories; 1:5000 dilution, Sigma Aldrich, respectively). Quantitative determination of protein was performed by autoradiography after revealing the antibody-bound protein by enhanced chemiluminescence. The data were normalized to calsequestrin or actin, quantified by reprobing each membrane with calsequestrin polyclonal IgG (Calbiochem) or actin monoclonal IgG (Sigma Aldrich), respectively.
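Two of the imaging quantities described above, the F/F0 normalization of the fluorescence signal and the time constant (τ) of the Ca2+ transient decay reported in the results, can be sketched as follows. This is not the authors' analysis code: it assumes a clean mono-exponential decay toward a known diastolic baseline, and the function names and synthetic trace are illustrative.

```python
import numpy as np

def f_over_f0(trace: np.ndarray, baseline_samples: int = 50) -> np.ndarray:
    """Normalize a fluorescence trace as F/F0, where F0 is the mean
    resting (diastolic) fluorescence before stimulation."""
    f0 = trace[:baseline_samples].mean()
    return trace / f0

def decay_tau(t: np.ndarray, f: np.ndarray, f_baseline: float = 1.0) -> float:
    """Time constant of a mono-exponential decay toward f_baseline,
    estimated by a log-linear fit of (f - f_baseline) against time."""
    slope, _ = np.polyfit(t, np.log(f - f_baseline), 1)
    return -1.0 / slope

# Synthetic F/F0 transient decaying with tau = 0.25 s:
t = np.linspace(0.0, 1.0, 200)
f = 1.0 + 2.0 * np.exp(-t / 0.25)
tau = decay_tau(t, f)  # ~0.25 s
```

A slower decay (larger τ), as observed in the untreated diabetic myocytes, would correspond to slower SR Ca2+ reuptake during diastole.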
A two-way ANOVA (treatment and time factors) for the in vivo measurements and a one-way analysis of variance (treatment factor) for the in vitro measurements were performed, as appropriate. Data are reported as means ± SE. Statistical significance was defined as P < 0.05. As expected, the STZ-treated rats exhibited hyperglycemia within 72 h post injection, which persisted during the 8-week experimental period (P < 0.05, Figure 1). Treatment with an AGE cross-link breaker, ALT-711, for 8 weeks did not significantly alter blood glucose concentration compared with untreated diabetic rats. In addition, the diabetic rats had a significantly lower body weight when compared to the control group, and there was a tendency (P < 0.1) for the treated diabetic rats to have a higher body weight than the untreated diabetic rats at 4 and 8 weeks after the induction of diabetes (Figure 2). Figure 1. Blood glucose was significantly increased in the DX and DX-ALT groups compared to the age-matched control group. Mean ± SE of blood glucose concentration in control (CON), untreated diabetic (DX), and treated diabetic (DX-ALT) rats at baseline (time 0) and up to 8 weeks after the induction of diabetes. n = 10–11/group. *P < 0.05 vs. age-matched controls. Figure 2. Body weight was significantly decreased in the DX and DX-ALT groups compared to the age-matched control group. Mean ± SE of body weight in control (CON), untreated diabetic (DX), and treated diabetic (DX-ALT) rats at baseline (time 0) and 8 weeks after the induction of diabetes. n = 10–11/group. *P < 0.05 vs. age-matched controls. †P < 0.1 vs. DX group. We then evaluated the effect of diabetes and ALT-711 therapy on systolic and diastolic function by echocardiographic examination in the (treated and untreated) diabetic and control groups. EF, a surrogate of systolic function, was mildly decreased at 8 weeks after the induction of diabetes compared to baseline values (Table 1).
In addition, the isovolumic relaxation time and the MPI, two parameters of LV relaxation, were significantly increased in untreated diabetic rats compared to the age-matched control groups (Table 2). ALT-711 therapy did not significantly alter the EF of the diabetic myocardium but blunted the increase in isovolumic relaxation time and MPI in diabetic animals, suggesting that ALT-711 therapy partially prevented cardiac dysfunction by primarily improving diastolic dysfunction (Figure 3 and Table 2). Figure 3. ALT-711 therapy partially alleviated diabetic cardiomyopathy by primarily improving Doppler-derived parameters of diastolic function. (A) Representative paired M-mode echocardiograms (top panel) and transmitral Doppler flow (bottom panel) at 8 weeks of treatment in control (Con), untreated diabetic (DX), and treated diabetic (DX-ALT) groups. IVS, interventricular septum; LVPW, left ventricular posterior wall; RV, right ventricle; IVRT, isovolumic relaxation time. (B) Mean ± SE of percentage change after 8 weeks of treatment compared to baseline value for ejection fraction (EF) of the left ventricle (LV) in control (Con), untreated diabetic (DX), and treated diabetic (DX-ALT) groups. (C) Mean ± SE of percentage change at 8 weeks compared to baseline value for isovolumic relaxation time (IVRT) of the left ventricle in control (Con), untreated diabetic (DX), and treated diabetic (DX-ALT) groups. (D) Mean ± SE of percentage change at 8 weeks compared to baseline value for myocardial performance index (MPI) in control (Con), untreated diabetic (DX), and treated diabetic (DX-ALT) groups. n = 9–11/group. *P < 0.05 vs. age-matched controls. †P < 0.05 vs. DX group. Table 1. Parameters derived from M-mode echocardiography of the LV for age-matched control (CON), untreated diabetic (DX), and treated diabetic (DX-ALT) groups at baseline and at 8 weeks after the induction of diabetes. Table 2.
Doppler-derived parameters of diastolic function for age-matched control (CON), untreated diabetic (DX), and treated diabetic (DX-ALT) groups at baseline and at 8 weeks after the induction of diabetes. Because diastolic dysfunction may be due to a reduced rate of sequestration of Ca2+ into the SR or to histological changes in the myocardium rendering it less compliant, hearts were stained with Masson's trichrome to quantify fibrosis. Masson's trichrome staining did not reveal significant fibrosis of the left ventricle in (treated and untreated) diabetic rats compared with the control group, suggesting primarily an impaired ventricular relaxation rather than increased myocardial stiffness in our diabetic model (Figure 4). Figure 4. Lack of fibrosis in the ventricles of (untreated and treated) diabetic rats. (A) Masson's trichrome staining showing fibrosis (fibrosis control experiment). (B–D) Representative trichrome staining (×20 objective) demonstrating the lack of ventricular fibrosis in age-matched control (Con), untreated diabetic (DX), and treated diabetic (DX-ALT) groups. Because AGEs accumulate on SR Ca2+ regulatory proteins and could alter their function, leading to diabetic cardiomyopathy, we further determined whether treatment with an AGE cross-link breaker would improve SR Ca2+ handling during diabetic cardiomyopathy by measuring Ca2+ transients in fluo-3-loaded cardiac myocytes with confocal Ca2+ imaging. There was a significant (P < 0.001) decrease in Ca2+ transient amplitude in isolated cardiac myocytes of (untreated and treated) diabetic rats when compared with controls (Figure 5). In addition, the Ca2+ transient decay during the diastolic phase was significantly prolonged in diabetic compared with control myocytes (Figure 5). ALT-711 therapy resulted in shortening of the Ca2+ transient decay, suggesting an improvement in SR Ca2+ reuptake in treated diabetic myocytes.
In addition, SR Ca2+ load, measured by caffeine-evoked Ca2+ transient amplitudes, was significantly decreased in myocytes from untreated diabetic, but not from treated diabetic, rats compared with the control group (Figure 5), suggesting that ALT-711 therapy partially attenuated SR Ca2+ content depletion in diabetic myocytes. Figure 5. ALT-711 therapy abolished the prolongation of Ca2+ transient decay in diabetic cardiac myocytes. (A) Representative confocal line scan images of Ca2+ transients along with their spatial averages in myocytes from age-matched control (CON, left), untreated diabetic (DX, middle), and treated diabetic (DX-ALT, right) rats. F0, diastolic fluorescence. (B) Mean ± SE of Ca2+ transient amplitude (F/F0) for CON, DX, and DX-ALT rats, n = 43–44/group. (C) Mean ± SE of the time constant (τ) of Ca2+ transient decay in CON, DX, and DX-ALT rats. n = 40 ± 4/group. The decay τ was significantly increased in DX compared to CON (P < 0.05). Note the significant decrease in DX-ALT compared to DX, showing an improvement in calcium reuptake time after 8 weeks of ALT-711 treatment. (D) Caffeine-induced Ca2+ transient amplitudes (mean ± SE) were reduced in myocytes from diabetic compared with control and treated diabetic rats. n = 4–5/group. *P < 0.05 vs. age-matched control myocytes. †P < 0.05 vs. DX group. To determine whether AGE accumulation could alter the expression of SR Ca2+ regulatory proteins, we performed quantitative immunoblot analysis of SERCA2a and RyR2. SERCA2a pump expression was significantly decreased in the myocardium of untreated diabetic, but not of treated diabetic, rats compared with the control group. We also observed a decrease in RyR2 protein expression in the myocardium of diabetic rats when compared to controls, while the treated diabetic group exhibited RyR2 protein levels similar to controls (Figure 6). Figure 6.
ALT-711 therapy partially attenuated the decreased expression of SR Ca2+ regulatory proteins in the diabetic myocardium. (A) Top panel: representative immunoblot of sarco(endo)plasmic reticulum Ca2+-ATPase (SERCA2a) and calsequestrin (loading control) expression in the myocardium of control (CON), untreated diabetic (DX), and treated diabetic (DX-ALT) groups; samples were from the same membrane, which was reprobed for calsequestrin (the loading control). Within the same gel, the reassembly of noncontiguous lanes is demarcated by white spaces. Bottom panel: normalized optical density (OD; relative to calsequestrin) of SERCA2a protein content was significantly decreased in DX, but not in DX-ALT, hearts compared to the control group. Data are mean ± SE for n = 4–5/group. (B) Top panel: representative immunoblot of ryanodine receptor (RyR2) protein expression in the myocardium of control (CON), untreated diabetic (DX), and treated diabetic (DX-ALT) hearts; samples were from the same membrane, which was reprobed for actin (the loading control). Within the same gel, the reassembly of noncontiguous lanes is demarcated by white spaces. Bottom panel: normalized optical density (relative to actin, the loading control) of RyR2 protein was significantly decreased in DX, but unchanged in DX-ALT, hearts compared to the control group. Data are mean ± SE for n = 5/group. *P < 0.05 vs. age-matched control hearts. The major finding of this study was that long-term treatment with ALT-711, an AGE cross-link breaker, partially restored SR Ca2+ handling in cardiac myocytes, primarily by improving Ca2+ transient decay compared to untreated diabetic rats. As a result, ALT-711 therapy partially prevented in vivo diastolic dysfunction in the diabetic myocardium of a rodent model of type 1 diabetes. The STZ diabetic rat model is a well-established model for studying insulin-dependent (type 1) diabetes.
STZ contains a glucose molecule with a highly reactive nitrosourea side chain, which initiates a specific cytotoxic action on the pancreatic β-cell. A few weeks after STZ injection, rodents develop biochemical and functional myocardial abnormalities, which are the result of chronic hyperglycemia rather than a direct effect of the drug itself. Therefore, diabetic rodents display clinical signs (hyperglycemia, polydipsia, glycosuria, and polyuria) and cardiovascular complications similar to those in human diabetic patients. Since a close relationship between the STZ dose and the severity of diabetes has been demonstrated, and since other parameters (such as animal strain, frequency or route of injection, preparation of STZ, and duration of diabetes) all significantly influence the severity of the model, we previously established in our laboratory a protocol (using a low dose of STZ) to induce a mild form of diabetes and to mimic the early metabolic and cardiac events that occur in diabetic subjects (Lacombe et al., 2007). As a result, the mortality rate was less than 5% and all the rats became diabetic. Importantly, these diabetic rats develop primarily mild diastolic dysfunction followed by mild systolic dysfunction, have prolonged QT intervals and action potential durations, and are prone to arrhythmias (Lacombe et al., 2007, 2010). In addition, this animal model is somewhat relevant to non-insulin-dependent diabetic (type 2) subjects, who also develop diabetic cardiomyopathy (Boudina and Abel, 2007). Indeed, while there is initially insulin resistance in type 2 diabetes, as the disease progresses there is also insulin deficiency secondary to the exhaustion of the pancreatic beta cells (which have produced large amounts of insulin to compensate for the insulin resistance).

Diabetic heart disease, also referred to as diabetic cardiomyopathy, is a major cause of cardiovascular disease in the United States today.
It can lead to heart failure and sudden death, killing ~65% of the patient population (Choi et al., 2002). The presence of LV diastolic dysfunction is an early complication of diabetes and is the first stage in the development of diabetic cardiomyopathy (Fang et al., 2004; Lacombe et al., 2007). Diastolic dysfunction refers to mechanical and functional abnormalities such as impairment of diastolic distensibility, filling, or relaxation of the left ventricle (Aurigemma et al., 2006). The incidence of diastolic dysfunction was underestimated until the recent advancement of non-invasive imaging tools for assessing cardiac relaxation, such as Doppler flow and tissue Doppler imaging. In particular, the MPI is a Doppler-derived parameter independent of blood pressure and load. MPI increases with worsening LV diastolic dysfunction, even during the early stages of subclinical diastolic dysfunction (Su et al., 2006). Early detection of this myocardial manifestation of diabetes is of major importance, since subclinical diastolic dysfunction contributes to a four- to eightfold increase in the risk for congestive heart failure in diabetic patients (Piccini et al., 2004). As previously reported by our group (Lacombe et al., 2007, 2010), this animal model displayed mild diastolic dysfunction, as evidenced by alterations of Doppler flow-derived parameters (i.e., increased isovolumic relaxation time and MPI). Since early relaxation is an active process regulated by SR Ca2+ handling, impaired myocardial relaxation is characterized by disturbances in calcium homeostasis rather than by fibrosis (Fang et al., 2004; Lacombe et al., 2007). Similarly, we did not detect a significant amount of fibrosis in (untreated and treated) diabetic rats, suggesting that impaired ventricular relaxation, rather than increased myocardial stiffness, primarily accounts for the negative lusitropy manifested during ventricular filling in our type 1 diabetic model.
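As a brief aside on the MPI used above: the myocardial performance (Tei) index is defined as the sum of the isovolumic contraction time (IVCT) and isovolumic relaxation time (IVRT) divided by the ejection time (ET). The sketch below illustrates the arithmetic with hypothetical interval values, not data from this study:

```python
def myocardial_performance_index(ivct_ms, ivrt_ms, et_ms):
    """Tei index: (isovolumic contraction time + isovolumic relaxation time)
    divided by ejection time. Dimensionless; rises as diastolic function worsens."""
    return (ivct_ms + ivrt_ms) / et_ms

# Hypothetical interval measurements in ms (illustration only):
healthy = myocardial_performance_index(30.0, 70.0, 280.0)    # ~0.36
impaired = myocardial_performance_index(35.0, 95.0, 260.0)   # 0.50
assert impaired > healthy
```

Because IVRT appears in the numerator, the prolonged isovolumic relaxation time reported in this model raises the index even when ejection time is little changed.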
This is in agreement with previous studies that reported no difference in myocardial collagen (vs. the control group) in a similar type 1 diabetic model exhibiting mild diastolic dysfunction (Dent et al., 2001). In addition, it has been suggested that diabetes mellitus can produce diastolic dysfunction before the development of myocardial fibrosis due to the formation of AGEs, although the mechanisms were not investigated (Norton et al., 1996; Fang et al., 2004). Therefore, this model of mild diastolic dysfunction allows us to evaluate the effect of AGEs on ventricular relaxation and its underlying alterations in SR Ca2+ homeostasis before the development of marked fibrosis.

AGEs are modified proteins that accumulate in the plasma of diabetic patients as a result of persistent hyperglycemia and are closely linked with cardiovascular diseases. During diabetes, and to a lesser extent during aging, AGEs also accumulate at an accelerated rate in various cell types (in days to weeks) and produce multiple-organ dysfunction (Cooper, 2004; Hartog et al., 2007). In the heart, AGE accumulation contributes to diastolic dysfunction by inducing myocardial fibrosis and stiffness (Norton et al., 1996; Asif et al., 2000; Vaitkevicius et al., 2001; Aronson, 2003; Candido et al., 2003; Liu et al., 2003; Bakris et al., 2004; van Heerebeek et al., 2008). However, its role in the development of diastolic dysfunction secondary to impaired ventricular relaxation, determined principally by the rate of resequestration of Ca2+ into the SR rather than by increased myocardial fibrosis, is not known. In the present study, we investigated a novel mechanism by which AGE accumulation functionally impairs SR Ca2+ regulatory proteins (especially the SERCA pump), by use of an antiglycation therapeutic agent, dimethyl-3-phenacylthiazolium chloride (alagebrium chloride or ALT-711), which chemically breaks AGE cross-links.
This compound has been tested in several pre-clinical animal studies and has been shown to significantly reduce cardiac AGE levels in STZ-induced diabetic rats and to prevent diabetes-induced structural changes in the myocardium (Asif et al., 2000; Vaitkevicius et al., 2001; Candido et al., 2003; Liu et al., 2003; Vasan et al., 2003; Bakris et al., 2004; Cooper, 2004). In contrast with inhibitors of AGE cross-linking (e.g., aminoguanidine), AGE cross-link breakers not only prevent but also reverse the cross-linking process once it has been established (Norton et al., 1996). Therefore, one could argue that beneficial therapeutic effects of ALT-711 similar to the ones observed in this study could be obtained once diabetic cardiomyopathy has been established. Since ALT-711 therapy was administered at the onset of diabetes in our study, further studies will be required to confirm its therapeutic effect in subjects with established diabetes. Overall, our in vivo data suggested that long-term treatment with ALT-711 improved the clinical condition of treated diabetic rats, as evidenced by the increase (although not statistically significant) in body weight. Importantly, ALT-711 therapy partially improved diastolic function, as evidenced by the attenuation of the prolongation in isovolumic relaxation time and MPI observed in treated diabetic rats. In addition, because of the lack of significant fibrosis in the diabetic myocardium, our data suggested that the beneficial therapeutic effects of ALT-711 were primarily due to improved ventricular relaxation. In isolated cardiac myocytes of diabetic animals, we observed prolonged Ca2+ transient decay, reduced intra-SR Ca2+ stores and Ca2+ transient amplitude, and decreased SERCA2a protein content, all of which are consistent with decreased SR Ca2+ reuptake during the relaxation phase, as previously reported by our group (Lacombe et al., 2007).
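The time constant τ of Ca2+ transient decay discussed above is conventionally obtained by fitting a monoexponential to the declining phase of the fluorescence trace. As a minimal sketch (assuming a known baseline and a noise-free synthetic trace, not the study's data), τ can be recovered with a log-linear least-squares fit:

```python
import math

def fit_tau(t, f, f_base):
    """Estimate the decay time constant tau (same units as t) by a log-linear
    least-squares fit of f(t) = A * exp(-t / tau) + f_base."""
    y = [math.log(fi - f_base) for fi in f]   # linearized: ln(A) - t / tau
    n = len(t)
    t_mean = sum(t) / n
    y_mean = sum(y) / n
    slope = (sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, y))
             / sum((ti - t_mean) ** 2 for ti in t))
    return -1.0 / slope

# Synthetic, noise-free transient decay in F/F0 units (illustrative values):
t = [i * 0.005 for i in range(200)]                  # seconds after the peak
f = [2.0 * math.exp(-ti / 0.25) + 1.0 for ti in t]   # true tau = 0.25 s, baseline 1.0
tau = fit_tau(t, f, f_base=1.0)                      # recovers ~0.25 s
```

A larger fitted τ corresponds to the slower SR Ca2+ reuptake seen in untreated diabetic myocytes; with real, noisy traces a nonlinear fit of the exponential itself is more robust than this log-linear shortcut.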
Extensive studies have tried to unravel the metabolic disturbances and intracellular targets that lead to impaired Ca2+ homeostasis and to the diabetic cardiomyopathic phenotype, a complex multifactorial disorder (Poornima et al., 2006). AGE accumulation during diabetes, and to a lesser extent during aging, could contribute to the observed cardiomyopathy, since AGEs form irreversible cross-links with many proteins with low turnover rates, such as collagen but also intracellular cardiac SR proteins (i.e., the SERCA2a pump and RyR2); however, their pathogenic role in excitation-contraction coupling has not been investigated. Since treatment with AGE cross-link breakers has been shown to completely prevent or reduce the formation of AGEs (Wolffenbuttel et al., 1998; Cooper et al., 2000; Vaitkevicius et al., 2001; Candido et al., 2003; Vasan et al., 2003; Forbes et al., 2004), ALT-711 treatment could have a potential beneficial effect on the function of SR Ca2+ regulatory proteins. Following ALT-711 therapy, we observed a normalization of the Ca2+ transient decay and a partial restoration of intra-SR Ca2+ stores and SERCA2a protein expression in diabetic cardiac myocytes. These data suggested that ALT-711 treatment decreased excess accumulation of AGEs on the SERCA pump by breaking the cross-links that form during diabetic cardiomyopathy, resulting in partial improvement of SERCA activity and SR Ca2+ reuptake. The enhanced SR Ca2+ reuptake during the relaxation phase of cardiac myocytes resulted in partial improvement of in vivo diastolic function in treated diabetic subjects. In contrast, treatment with the AGE cross-link breaker for 8 weeks did not normalize Ca2+ transient amplitude in isolated diabetic myocytes. Consistent with these in vitro findings, we observed a moderate, but persistent, reduction of cardiac contractility in diabetic animals treated with ALT-711, as evidenced by the mild decrease in EF in both untreated and treated diabetic animals.
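EF in the passage above is the standard left-ventricular ejection fraction, 100 × (EDV − ESV)/EDV. A minimal sketch of the calculation with hypothetical volumes (not measurements from this study):

```python
def ejection_fraction(edv, esv):
    """Left-ventricular ejection fraction (%) from end-diastolic (EDV) and
    end-systolic (ESV) volumes: EF = 100 * (EDV - ESV) / EDV."""
    return 100.0 * (edv - esv) / edv

# Hypothetical rat LV volumes in ml (illustration only):
normal_ef = ejection_fraction(0.60, 0.24)          # 60.0 %
mildly_reduced_ef = ejection_fraction(0.60, 0.28)  # ~53.3 %
assert mildly_reduced_ef < normal_ef
```

A modest rise in end-systolic volume alone is enough to produce the kind of mild EF decrease described for the diabetic groups.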
One surprising finding was the persistent decrease in Ca2+ transient amplitude in the face of partially restored SR Ca2+ load and RyR2 protein expression in the diabetic myocardium after ALT-711 treatment. Since abnormal SR Ca2+ release during diastole has been reported in diabetic myocytes (Shao et al., 2007), one could hypothesize that ALT-711 therapy may also improve Ca2+ homeostasis by stabilizing RyR-mediated SR Ca2+ release during the relaxation phase of diabetic cardiac myocytes, and that AGE accumulation may also impair RyR function, leading to diastolic SR Ca2+ leak. Therefore, these data further support the concept that AGE accumulation may play a larger pathogenic role during diastolic dysfunction and diastolic heart failure than during systolic dysfunction (Hartog et al., 2007).

Treatment with an AGE cross-link breaker partially attenuated the alterations in cardiac function and SR Ca2+ handling during diabetic cardiomyopathy. Since diabetic cardiomyopathy is a multifactorial disorder, these data suggest that AGE accumulation contributes to the impairment in excitation-contraction coupling by altering the function of SR Ca2+ regulatory proteins, leading to a decreased ability of the diabetic myocardium to relax. Therefore, findings from this study provide novel mechanistic insights into the pathogenic role of AGE accumulation in SR Ca2+ handling in cardiac myocytes. Finally, since there is currently a lack of specific therapy to improve LV relaxation, findings from this study could have direct practical implications for the development of therapeutic strategies for patients with diabetic cardiomyopathy.

We gratefully acknowledge Natalie Virell, Hsiang-Ting Ho, Dr. Amanda Waller and Dr. Amy Gerwitz for their excellent technical assistance with the experiments and data analysis. Support was provided by the American Heart Association, Great River affiliate (Grant-In-Aid to Véronique A.
Lacombe) and by National Institutes of Health Grants (to Véronique A. Lacombe and R00 HL091056 to Brandon J. Biesiadecki). Amos, A. F., McCarty, D. J., and Zimmet, P. (1997). The rising global burden of diabetes and its complications: estimates and projections to the year 2010. Diabet. Med. 14, S1–S85. Aronson, D. (2003). Cross-linking of glycated collagen in the pathogenesis of arterial and myocardial stiffening of aging and diabetes. J. Hypertens. 21, 3–12. Asif, M., Egan, J., Vasan, S., Jyothirmayi, G. N., Masurekar, M. R., Lopez, S., Williams, C., Torres, R. L., Wagle, D., Ulrich, P., Cerami, A., Brines, M., and Regan, T. J. (2000). An advanced glycation end product cross-link breaker can reverse age-related increases in myocardial stiffness. Proc. Natl. Acad. Sci. U.S.A. 97, 2809–2813. Aurigemma, G. P., Zile, M. C., and Gaasch, W. H. (2006). Contractile behavior of the left ventricle in diastolic heart failure: with emphasis on regional systolic function. Circulation 113, 296–304. Bakris, G. L., Bank, A. J., Kass, D. A., Neutel, J. M., Preston, R. A., and Oparil, S. (2004). Advanced glycation end-product cross-link breakers. A novel approach to cardiovascular pathologies related to the aging process. Am. J. Hypertens. 17, 23S–30S. Bers, D. M. (2002). Cardiac excitation-contraction coupling. Nature 415, 198–205. Bidasee, K. R., Nallani, K., Yu, Y., Cocklin, R. R., Zhang, Y., Wang, M., Dincer, U. D., and Besch, H. R. Jr. (2003). Chronic diabetes increases advanced glycation end products on cardiac ryanodine receptors/calcium-release channels. Diabetes 52, 1825–1836. Bidasee, K. R., Zhang, Y., Shao, C. H., Wang, M., Patel, K. P., Dincer, U. D., and Besch, H. R. Jr. (2004). Diabetes increases formation of advanced glycation end products on Sarco(endo)plasmic reticulum Ca2+-ATPase. Diabetes 53, 463–473. Boudina, S., and Abel, E. D. (2007). Diabetic cardiomyopathy revisited. Circulation 115, 3213–3223. Candido, R., Forbes, J. M., Thomas, M. C., Thallas, V., Dean, R. 
G., Burns, W. C., Tikellis, C., Ritchie, R. H., Twigg, S. M., Cooper, M. E., and Burrell, L. M. (2003). A breaker of advanced glycation end products attenuates diabetes-induced myocardial structural changes. Circ. Res. 92, 785–792. Choi, K. M., Zhong, Y., Hoit, B. D., Grupp, I. L., Hahn, H., Dilly, K. W., Guatimosim, S., Lederer, W. J., and Matlib, M. A. (2002). Defective intracellular Ca2+ signaling contributes to cardiomyopathy in Type 1 diabetic rats. Am. J. Physiol. Heart Circ. Physiol. 283, H1398–H1408. Cooper, M. E. (2004). Importance of advanced glycation end products in diabetes-associated cardiovascular and renal disease. Am. J. Hypertens. 17, 31S–38S. Cooper, M. E., Thallas, V., Forbes, J., Scalbert, E., Sastra, S., Darby, I., and Soulis, T. (2000). The cross-link breaker, N-phenacylthiazolium bromide, prevents vascular advanced glycation end-product accumulation. Diabetologia 43, 660–664. Dent, C. L., Bowman, A. W., Scott, M. J., Allen, J. S., Lisauskas, J. B., Janif, M., Wickline, S. A., and Kovacs, S. J. (2001). Echocardiographic characterization of fundamental mechanisms of abnormal diastolic filling in diabetic rats with a parameterized diastolic filling formalism. J. Am. Soc. Echocardiogr. 14, 1166–1172. Dirksen, W. P., Lacombe, V. A., Chi, M., Kalyanasundaram, A., Viatchenko-Karpinski, S., Terentyev, D., Zhou, Z., Vedamoorthyrao, S., Li, N., Chiamvimonvat, N., Carnes, C. A., Franzini-Armstrong, C., Györke, S., and Periasamy, M. (2007). A mutation in calsequestrin CSQD307H impairs SR calcium storage and release functions and causes polymorphic ventricular tachycardia in mice. Cardiovasc. Res. 75, 69–78. Fang, Z. Y., Prins, J. B., and Marwick, T. H. (2004). Diabetic cardiomyopathy: evidence, mechanisms, and therapeutic implications. Endocr. Rev. 25, 543–567. Forbes, J. M., Yee, L. T., Thallas, V., Lassila, M., Candido, R., Jandeleit-Dahm, K. A., Thomas, M. C., Burns, W. C., Deemer, E. K., Thorpe, S. M., Cooper, M. E., and Allen, T. J. (2004). 
Advanced glycation end product interventions reduce diabetes-accelerated atherosclerosis. Diabetes 53, 1813–1823. Hartog, J. W., Voors, A. A., Bakker, S. J., Smit, A. J., and van Veldhuisen, D. J. (2007). Advanced glycation end-products (AGEs) and heart failure: pathophysiology and clinical implications. Eur. J. Heart Fail. 9, 1146–1155. Kubalova, Z., Terentyev, D., Viatchenko-Karpinski, S., Nishijima, Y., Györke, I., Terentyeva, R., da Cunha, D. N., Sridhar, A., Feldman, D. S., Hamlin, R. L., Carnes, C. A., and Györke, S. (2005). Abnormal intrastore calcium signaling in chronic heart failure. Proc. Natl. Acad. Sci. U.S.A. 102, 14104–14109. Lacombe, V. A., Terentyev, D., Viatchenko-Karpinski, S., Hamlin, R. L., Györke, S., and Carnes, C. (2010). Diltiazem treatment attenuates arrhythmogenesis during diabetic cardiomyopathy by stabilizing ryanodine receptors-mediated sarcoplasmic reticulum calcium release (abstract). Circulation 122, A20958. Lacombe, V. A., Viatchenko-Karpinski, S., Terentyev, D., Sridhar, A., Emani, S., Bonagura, J. D., Feldman, D. S., Györke, S., and Carnes, C. A. (2007). Mechanisms of impaired calcium handling underlying subclinical diastolic dysfunction in diabetes. Am. J. Physiol. Regul. Integr. Comp. Physiol. 293, R1787–R1797. Lagadic-Gossmann, D., Buckler, K. J., Le Prigent, K., and Feuvray, D. (1996). Altered Ca2+ handling in ventricular myocytes isolated from diabetic rats. Am. J. Physiol. 270, H1529–H1537. Liu, J., Masurekar, M. R., Vatner, D. E., Jyothirmayi, G. N., Regan, T. J., Vatner, S. F., Meggs, L. G., and Malhotra, A. (2003). Glycation end-product cross-link breaker reduces collagen and improves cardiac function in aging diabetic heart. Am. J. Physiol. Heart Circ. Physiol. 285, H2587–H2591. Ma, H., Li, S. Y., Xu, P., Babcock, S. A., Dolence, E. K., Brownlee, M., Li, J., and Ren, J. (2009). Advanced glycation endproduct (AGE) accumulation and AGE receptor (RAGE) up-regulation contribute to the onset of diabetic cardiomyopathy. J. 
Cell. Mol. Med. 13, 1751–1764. Meurs, K., Lacombe, V. A., Dryburgh, K., Fox, P. R., and Kittleson, M. D. (2006). Differential expression of the cardiac ryanodine protein in normal dogs and boxer dogs with arrhythmogenic right ventricular cardiomyopathy. Hum. Genet. 120, 111–118. Netticadan, T., Temsah, R. M., Kent, A., Elimban, V., and Dhalla, N. S. (2001). Depressed levels of Ca2+-cycling proteins may underlie sarcoplasmic reticulum dysfunction in the diabetic heart. Diabetes 50, 2133–2138. Norton, G. R., Candy, G., and Woodiwiss, A. J. (1996). Aminoguanidine prevents the decreased myocardial compliance produced by streptozotocin-induced diabetes mellitus in rats. Circulation 93, 1905–1912. Piccini, J. P., Klein, L., Gheorghiade, M., and Bonow, R. O. (2004). New insights into diastolic heart failure: role of diabetes mellitus. Am. J. Med. 116, 64S–75S. Pierce, G. N., and Russell, J. C. (1997). Regulation of intracellular Ca2+ in the heart during diabetes. Cardiovasc. Res. 34, 41–47. Poornima, I., Parikh, P., and Shannon, R. (2006). Diabetic cardiomyopathy: the search for a unifying hypothesis. Circ. Res. 98, 596–605. Ratnadeep, B., Gavin, O. Y., Xiuhua, W., Liyan, Z., John, U. R., Gary, L. D., and Zamaneh, K. (2009). Type 1 diabetic cardiomyopathy in Akita (Ins2 WT/C96Y) mouse model is characterized by lipotoxicity and diastolic dysfunction with preserved systolic function. Am. J. Physiol. Heart Circ. Physiol. 297, H2096–H2108. Shao, C. H., Rozanski, G. J., Patel, K. P., and Bidasee, K. R. (2007). Dyssynchronous (non-uniform) Ca2+ release in myocytes from streptozotocin-induced diabetic rats. J. Mol. Cell. Cardiol. 42, 234–246. Su, H. M., Lin, T. H., Voon, W. C., Lee, K. T., Chu, C. S., Lai, W. T., and Sheu, S. H. (2006). Differentiation of left ventricular diastolic dysfunction, identification of pseudonormal/restrictive mitral inflow pattern and determination of left ventricular filling pressure by Tei index obtained from tissue Doppler echocardiography. 
Echocardiography 23, 287–294. Vaitkevicius, P. V., Lane, M., Spurgeon, H., Ingram, D. K., Roth, G. S., Egan, J. J., Vasan, S., Wagle, D. R., Ulrich, P., Brines, M., Wuerth, J. P., Cerami, A., and Lakatta, E. G. (2001). A cross-link breaker has sustained effects on arterial and ventricular properties in older rhesus monkeys. Proc. Natl. Acad. Sci. U.S.A. 98, 1171–1175. van Heerebeek, L., Hamdani, N., Handoko, M. L., Falcao-Pires, I., Musters, R. J., Kupreishvili, K., Ijsselmuiden, A. J., Schalkwijk, C. G., Bronzwaer, J. G., Diamant, M., Borbély, A., van der Velden, J., Stienen, G. J., Laarman, G. J., Niessen, H. W., and Paulus, W. J. (2008). Diastolic stiffness of the failing diabetic heart. Importance of fibrosis, advanced glycation end products, and myocyte resting tension. Circulation 117, 43–51. Vasan, S., Foiles, P., and Founds, H. (2003). Therapeutic potential of breakers of advanced glycation end product-protein crosslinks. Arch. Biochem. Biophys. 419, 89–96. Ware, B., Bevier, M., Nishijima, Y., Rogers, S., Carnes, C. A., and Lacombe, V. A. (2011). Chronic heart failure selectively induces regional heterogeneity of insulin-responsive glucose transporters. Am. J. Physiol. Regul. Integr. Comp. Physiol. 301, R1300–R1306. Wolffenbuttel, B. H., Boulanger, C. M., Crijns, F. R., Huijberts, M. S., Poitevin, P., Swennen, G. N., Vasan, S., Egan, J. J., Ulrich, P., Cerami, A., and Levy, B. I. (1998). Breakers of advanced glycation end products restore large artery properties in experimental diabetes. Proc. Natl. Acad. Sci. U.S.A. 95, 4630–4634. Zhong, Y., Ahmed, S., Grupp, I. L., and Matlib, M. A. (2001). Altered SR protein expression associated with contractile dysfunction in diabetic rat hearts. Am. J. Physiol. Heart Circ. Physiol. 281, H1137–H1147. Accepted: 04 July 2012; Published online: 19 July 2012. Copyright © 2012 Kranstuber, del Rio, Biesiadecki, Hamlin, Ottobre, Gyorke and Lacombe. 
This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and subject to any copyright notices concerning any third-party graphics etc.
The historical problem of trichinae infection in pigs is responsible for strict federal control of methods used to prepare ready-to-eat pork products in the U.S., and expensive carcass inspection requirements in Europe. While prevalence has declined considerably in U.S. pigs, the lowest prevalence rates in domestic pigs are found in countries where meat inspection programs have been in place for many years; these countries consider themselves essentially free of trichinae. Although trichinae infection is rare in today's industry, pork still suffers from its legacy. In the U.S., the traditional approach to trichinae control is strict control of processed products to inactivate trichinae and warnings to consumers of the need to cook fresh pork. An alternative method of testing pigs for trichinae infection is an indirect method which looks for antibodies to the parasite in pig blood. Other approaches include herd testing to prove that trichinae infection is not present, or raising pigs under conditions which prevent exposure. The OIE Code states the following: a country, or part of the territory of a country, may be considered free from trichinae in domestic swine when: 1) trichinellosis in humans and animals is compulsorily notifiable in the country; 2) there is in force an effective disease reporting system shown to be capable of capturing the occurrence of cases; and 3) it has been found that trichinae infection does not exist in the domestic swine population, as determined by regular testing of a statistically significant sample of the population; or 4) trichinellosis has not been reported in five years and a surveillance program shows that the disease is absent from wild animal populations.

Farm certification as a method of trichinae control – Like Canada and many other developed countries, the U.S. has an extremely low incidence of trichinae infection in pigs. Although human trichinellosis is a reportable disease, the U.S.
has no history of regular testing to determine trichinae infection in pigs, nor do most states require reporting of trichinae infection in pigs if found. Considering the existing public perception of trichinae as a problem, coupled with the reality of a very low level of occurrence, the U.S. pork industry would likely benefit substantially from a program which assured the absence of trichinae from pigs. Efforts to certify pork free from trichinae should have an immediate impact on international markets by producing a product which is competitive with countries which currently inspect for trichinae. The U.S. pork industry can't catch up with the rest of the world on trichinae by starting now to test pigs at slaughter.

I've put together these steps and first-rate tips for you to start a wonderful journey to raising goats from scratch, so you can avoid the costly mistakes along the way. Discover the goat management program to guide you in choosing the fencing, feeders, water containers, equipment, nutrition, health care, feeding, showing, etc. The content covers developing your own time and budget for starting to raise goats, selection criteria for goats, disease descriptions, nutrition plans, production plans, training show goats, and showmanship. Discover housing plans and equipment for small herds of goats. If you decide to build a facility for your goats, be sure to check this manual, as it covers a milk house, milking room, milking stand for goats, feed racks for goats, a keyhole goat feeder, a walk-thru milking parlor for goats, loose housing for 20 goats and kids, a milking barn and milkhouse for 10 goats, etc. This 250-page ebook explains from A to Z how to raise Angora goats. Learning to raise goats of one breed can give you another perspective on raising goats in general. This informative guide shares with you the goat meat demand, import/export figures, seasonal trends, and the ethnic populations and immigration patterns which affect goat meat consumers.
An extensive resource list is included, and the guide covers raising goats on pasture, controlled grazing, supplemental feeding, body condition scoring, reproduction, kid management, health concerns, and the marketing of goats. Yes, Ted! I am eager to start learning how to raise goats and claim my copy of 'How To Raise Goats – The Beginner's Guide To Raising Goats' for over 50% OFF. All yours instantly for: $187. I've already explained to you exactly how this comprehensive guide can help you – it'll teach you the knowledge I wish I knew when I first started out raising goats – saving you both time and money! If YOU want to learn how to raise goats without making the costly mistakes, 'The Beginner's Guide to Raising Goats' is for YOU!

LIV-100 Powder & Liquid: our natural herbal liver tonic, a cost-effective and highly efficient product; along with this we also offer other swine health products. Dosage for powder: minimum 500 g to 1 kg mixed in 1 M.T. of feed, or minimum 1 kg mixed in 1 M.T. of feed. Presentation: powder, 1 kg and 20 kg.

HERBALEAN Powder: our manufactured Herbalean powder is made from a herbal preparation, in order to keep the body fat-free, healthy and lean.

AROSTRESS Powder & Liquid: a herbal anti-stress tonic with adaptogens for stress and metabolic regulation. Dosage for powder: minimum 1 kg mixed in 1 M.T. of feed; in prevention, 1–2 kg/M.T. of feed, depending on conditions. Presentation: powder, 1 kg and 20 kg.

DESENTA Powder & Liquid: an anti-diarrhoeal that promotes immunity against diarrhoea in swine. Presentation: powder, 1 kg and 20 kg.

CHOLINE-H Powder: a natural source of 100% assimilable herbal choline and biotin. Dosage for sows: powder, 10–12 g twice daily; liquid, 20–40 ml twice daily. Presentation: liquid, 450 ml and 900 ml; powder, 1 kg and 20 kg.

DHETASOLE Capsules/Powder: used as a heat inducer, which brings the female animal into proper heat.
Semi-herbal Feed Supplements:

TOXSTOP Powder: a herbal toxin binder that detoxifies the system and eliminates toxins. Dosage: swine, 5 ml daily.

NURIMIN SUPER FORTE Powder: a powdered mineral mixture with vitamin A. Overcomes anoestrus and infertility, prevents haemoglobinuria, overcomes prolapse and anemia, prevents rickets, removes emaciation, general debility and unthriftiness, and increases libido.

5 Secrets To Good Guinea Pig Health!

Walsh MC, Sholly DM, Hinson RB, Saddoris KL, Sutton AL, Radcliffe JS, Odgaard R, Murphy J, Richert BT. Effects of water and diet acidification with and without antibiotics on weanling pig growth and microbial shedding. 3. Stein H. Feeding the pig's immune system and alternatives to antibiotics.

FACT Sheet: In-feed antibiotics

Antimicrobial agents, such as antibiotics, have been used in pig production for over 50 years. Early studies indicated significant improvements in pig growth performance when antibiotics were fed. With the improvements in production practices and health status of pig herds, positive responses to in-feed antibiotics may not be as large in today's modern facilities. Some of the proposed mechanisms by which antibiotics improve growth include inhibition of subclinical pathogenic bacterial infections; reduction of microbial metabolism products that may negatively affect pig growth; inhibition of microbial growth, thereby increasing nutrients available to the pig; and an increase in uptake and utilization of nutrients through the intestinal wall.1

Efficacy of in-feed antibiotics – Studies2 on the effects of antibiotic feed additives have indicated significant improvements in growth rate and feed efficiency. A more recent study3 on the use of in-feed antibiotics in modern production systems showed that such additives are still effective in improving growth in nursery pigs, although the magnitude of the response is less.
Choosing the proper antibiotic – When the antibiotic appropriate for a specific herd is selected, a number of important things must be considered, for example, the disease organisms present in the herd. While in-feed antibiotic use is most prevalent in nursery diets, it is sometimes necessary to use antibiotics in grow-finish diets, e.g., during outbreaks of bacterial disease.

Journal of Swine Health and Production – September and October 2009

Table 2: Effectiveness of in-feed antibiotics in nursery and grow-finish pigs reared in modern production systems*

Phase         Parameter   Control   Antibiotic†
Nursery       ADG         0.96      1.01
Nursery       F:G         1.44      1.42
Grow-finish   ADG         1.72      1.72
Grow-finish   F:G         2.90      2.90

* Adapted from Dritz et al, 2002.3 Data from five and four experiments, involving 3648 and 2660 pigs, for the nursery and grow-finish phases, respectively.

Proper use of in-feed antibiotics – While most in-feed antibiotics are available without veterinary supervision, they should not be used indiscriminately. Because of the improvements made in housing, nutrition, production, and health-management practices over the years, the impact of antibiotics on growth performance may not be as large or as consistent as the responses observed during the early years of antibiotic use.

The Long Haired German Shepherd is a medium- to large-sized working dog that originated in Germany. Even two short-haired German Shepherds can produce long-haired offspring if the gene is present in their DNA. While the rare Long Haired German Shepherd may be the result of genetics, they are becoming increasingly popular with dog owners. Some dog owners prefer a dog with fluffy long hair, and the Long Haired German Shepherd does not disappoint. A normal German Shepherd has a double coat, whereas a true Long Haired German Shepherd does not. The Long Haired German Shepherd is a medium to large dog, weighing 66 to 88 lbs and standing approximately 25 inches tall.
The Long Haired German Shepherd is extremely playful and enjoys playing with toys and with its family members. The Long Haired German Shepherd is an inactive dog when indoors, so an apartment is fine as long as the dog is exercised on a regular basis. It's not a difficult task to take care of your Long Haired German Shepherd, but there are a couple of things that you need to remember. He is moderate maintenance in terms of grooming: with its long coat, the Long Haired German Shepherd is constantly shedding, and its hair tends to get matted and stuck together if it is not cared for and brushed properly. There are several accepted methods of house training your new Long Haired German Shepherd puppy. In general, if properly cared for and provided everything it needs to properly thrive, the Long Haired German Shepherd is expected to live for a happy 10–14 years.

1.) Using a Venn diagram, the students will distinguish the similarities and differences between the story The Three Little Pigs and the Big Bad Wolf and the story The Three Little Wolves and the Big Bad Pig. The students will demonstrate prior knowledge of the story The Three Little Pigs and the Big Bad Wolf through class discussion. Have students sit down and ask the class who has heard of the story The Three Little Pigs and the Big Bad Wolf. Ask questions about what happened only in the three little pigs story and write one answer in that circle. Ask what happened only in the three little wolves story and write one answer in that circle. Then ask what happened in both stories and write one answer in the overlapping part of the circles. The students will be assessed on their knowledge of the story The Three Little Pigs and the Big Bad Wolf through questioning during class discussion.
The students should display accurate information about the story. The students will be evaluated on their understanding of the concept of comparing and contrasting the two stories through their ability to create an accurate Venn diagram. One alternative strategy would be to have the class present a puppet show comparing and contrasting the two stories. Half the class could act out one story and the other half could act out the other story. Then each side could discuss what was different and what was the same between the story they acted out and the story the other half of the class acted out.

McDonald’s food is not really good enough to feed to pigs, because at the end of the day humans will eat the pigs, so it is not a safe thing to do. I would not even feed it to animals that are not in the human food chain, because McDonald’s is just not healthy. McDonald’s is a worldwide exporter of obesity, diabetes, and heart disease. Hamburger chef Jamie Oliver has won his long-fought battle against one of the largest fast food chains in the world: McDonald’s. After Oliver showed how McDonald’s hamburgers are made, the franchise finally announced that it would change its recipe, and yet there was barely a peep about this in the mainstream, corporate media. McDonald’s is without any morals; money is ruler and king, the be-all and the end-all. In reply to all of the bad press this process has received from Oliver, Arcos Dorados, the franchise manager for McDonald’s in Latin America, said that no such procedure is practiced in their region. McDonald’s is lucky that no one forces its top management to eat its own muck day in and day out, for every meal. On the official McDonald’s website, the company claims that its meat is cheap because, while serving many people every day, it is able to buy from its suppliers at a lower price and offer the best quality products.
On the site, McDonald’s has admitted that it has abandoned the beef filler in its burger patties. McDonald’s uses 19 ingredients in its french fries alone, and that is after the potatoes have been grown with very strong pesticides, including probable carcinogens like chlorothalonil and pendimethalin, as well as chlorpyrifos, PCNB, and 2,4‐D. Pesticide drift is a major problem, negatively impacting public health and the environment and killing the livestock of neighboring farmers. If you still prefer to eat at McDonald’s, good luck to you.

Different cattle feeding production systems have separate advantages and disadvantages. Some corn-fed cattle are fattened in concentrated animal feeding operations known as feed lots. Because much of Alberta’s land is better suited for cattle grazing than crop growing, the province raises 40 percent of the cattle in Canada – about five million head. The other three western provinces are also well endowed with grazing land, so nearly 90 percent of Canadian beef cattle are raised in Alberta and the other western provinces. According to the United States Department of Agriculture, there are 25-33 million head of feed cattle moving through custom and commercial cattle feedyards annually, and millions of dollars move through these custom and private cattle feeding facilities every year. The business of feeding cattle is based on a commodity market mechanism. Cattle production worldwide is differentiated by animal genetics and feeding methods, resulting in differing quality types. Grain-fed cattle have more internal fat, which results in a more tender meat than that of forage-fed cattle of a similar age. Another effect of feeding flax in cattle rations is an observed increase in daily dry matter intake. Although the direct beneficial effects of feeding omega-3 fatty acids remain uncertain, the preventative effect of feeding omega-3s to stressed cattle has shown great promise.
In 1997, regulations prohibited the feeding of mammalian byproducts to ruminants such as cattle and goats. Campylobacter, a bacterium that can cause a foodborne illness resulting in nausea, vomiting, fever, abdominal pain, headache, and muscle pain, was found by Australian researchers to be carried by 58% of cattle raised in feed lots, versus only 2% of pasture-raised and -finished cattle.
Pigs are single-stomach animals and require two or three meals a day. Divide the food into two portions: feed the pigs half in the morning and the rest in the evening. Small or weak pigs should be fed separately from the bigger ones, because the stronger pigs will otherwise eat all the food. Likewise, if big and small pigs share a pen or sty there will be fighting, and the smaller or weaker ones will be bullied. If you have more than four adult pigs, divide the food into two containers so that every animal can have a share. The floor of the pen should slope so that excess water can run off, allowing the pen to stay dry. If water does collect in the pen, dig a drainage furrow or ditch leading out of it. Make sure the pen is cleaned out at least twice a week to lessen the risk of disease, and clean food and water containers thoroughly at least twice a week as well. Pigs are pregnant for about four months and can have as many as 10 young at a time. The farrowing pen should keep the piglets warm and close to their mother, and a sow with piglets must have clean water at all times and plenty of good, fresh food twice a day. Swine rations, whether bought or mixed on the farm, usually contain a ground cereal grain, a protein source, salt, a calcium source, a phosphorus source, and a vitamin-trace mineral premix. Medications, such as antibiotics, may also be added. About 50 to 85% of the ingredients in swine rations are cereal grains, which are the main way of providing energy.
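To make the ration arithmetic concrete, here is a minimal Python sketch. The ingredient names and percentages are illustrative assumptions only (a real mix should follow Table 1 or a nutritionist’s formulation); they are chosen so the cereal-grain share falls in the 50 to 85% range mentioned above.

```python
# Hypothetical complete ration, percent by weight (illustrative, not a recommendation)
ration = {
    "ground corn": 70.0,
    "soybean meal": 26.0,
    "base mix (salt, Ca, P, vitamin-trace mineral premix)": 4.0,
}

def batch_weights(ration_pct, batch_lb):
    """Scale ingredient percentages to pounds for one mixed batch."""
    assert abs(sum(ration_pct.values()) - 100.0) < 1e-6, "percentages must total 100"
    return {name: pct / 100 * batch_lb for name, pct in ration_pct.items()}

# Weights for a one-ton (2000 lb) batch
for name, lb in batch_weights(ration, 2000).items():
    print(f"{name}: {lb:.0f} lb")
```

The sanity check on the percentage total catches the most common mixing-sheet error before any feed is weighed out.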
Wheat is an excellent swine feed, with an energy value of about 98 to 100% that of corn. Some pork producers prefer to feed a mixture of wheat and corn or grain sorghum because they feel performance is better with a combination of grain sources. The basic ration types are bred-sow, nursing-sow, starter, growing, and finishing rations. Some producers may use the same ration for bred sows and nursing sows. Table 1 gives example swine rations that may be mixed on the farm if a pork producer has a feed mill. Producers can often get a custom mix of a complete ration made at a feed mill. Each base mix is then mixed with soybean meal and ground grain to produce the ration. See your 4-H leader, county extension agent, feed dealer, or veterinarian about the choice of antibiotics to use in a ration.

Please fill me in on the best practices for a pig farmer. The abuse comes in many forms, starting with housing, where farmers are tempted to keep many pigs in a small house. It is common to see many pigs in an overcrowded stall on a concrete floor without straw for bedding or rooting. As a sign of stress, such pigs will fight and bite each other. For pigs kept indoors, there should at least be provision for them to express normal behaviour. Pigs are often said to be dirty animals, but this isn’t true: pigs are among the few animals that don’t mix their food with their waste. Given enough room, pigs will defecate at the corner furthest from where their feed is served. This false belief is a creation of farmers who keep pigs in squalid conditions and later blame it on the pigs. A good house should protect the pig from extreme weather conditions. Transportation causes a lot of stress to pigs; unlike other animals, pigs don’t have sweat glands, so being transported on a hot day causes heat stress, which can be fatal. To minimise this stress, pigs should preferably be transported at a cooler time of day: early in the morning or late in the afternoon. Taking care of your pigs is the first obligation you should take up as a farmer.
The USDA’s 2015 annual report on animal use at research facilities shows a continued decreasing trend in the number of animals used in U.S. laboratories. The report revealed that 904,147 animals covered by the Animal Welfare Act were held in labs last year, and that 767,622 were used in research, a drop of over eight percent from 2014. Hamsters are among the most used animals in labs, but their numbers decreased by almost 20 percent in 2015. Over half of the hamsters were used in experiments involving pain. There were increases in the number of other animals used in experiments. While much of this data is welcome news, it’s important to note that only animals covered by the AWA are included in this report. Since rats, mice, birds, and fish do not fall under the umbrella of the AWA, labs are not required to count them, yet AAVS estimates that they comprise 95 percent of all animals in labs. Unlike the U.S., the European Union and Canada provide regulatory and legal protection for these animals. Please visit the USDA’s website for more specific information about animal use.

Introduction: Voluntary Feed Intake and Stressors

Voluntary feed intake of pigs determines nutrient intake levels and thus has a great impact on the efficiency of pork production. The intensive selection programs for pig genotypes with better feed efficiency and carcass leanness have inadvertently selected pigs with reduced voluntary feed intake. Cold temperatures increase feed intake, while hot temperatures reduce feed intake when compared to temperatures in the comfort or thermal-neutral zone. A 37 per cent reduction in space allowance, from 0.55 to 0.35 m2/pig, reduced feed intake of grower pigs by 11 per cent, whereas a 55 per cent reduction, from 0.56 to 0.25 m2/pig, reduced feed intake by eight per cent (Hyun et al.). Group size, defined as the number of pigs in a single pen, alters the feed intake pattern of pigs, and these changes might alter overall daily feed intake.
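As a quick arithmetic check of the space-allowance figures quoted above (attributed to Hyun et al.), the percent reductions can be recomputed from the m2/pig values; this sketch is my own check, not part of the cited study.

```python
def pct_reduction(before_m2, after_m2):
    """Percent reduction in floor space allowance per pig."""
    return (before_m2 - after_m2) / before_m2 * 100

# 0.55 -> 0.35 m2/pig: roughly the quoted 37 per cent reduction
print(f"{pct_reduction(0.55, 0.35):.1f}%")
# 0.56 -> 0.25 m2/pig: the quoted 55 per cent reduction
print(f"{pct_reduction(0.56, 0.25):.1f}%")
```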
Voluntary Feed Intake and Feed

Feed composition, in terms of nutrient content and nutrient balance, is an important determinant of feed intake. Generally, pelleting of feed reduces feed intake but results in improved growth performance due to improved nutrient digestibility of the feed.

Voluntary Feed Intake and Ingredients

For pigs, information is limited about variation in voluntary feed intake among batches of ingredients. Voluntary feed intake was a better predictor of performance than the AME content of ingredient samples, indicating that factors other than AME content determine voluntary feed intake of broiler chicks. Finally, the AME content of wheat and barley, voluntary feed intake, and subsequent performance among ingredient batches could not be predicted accurately from chemical characteristics, but were highly predictable by NIRS.

(Figure: Selecting for increased leanness has reduced the amount that pigs eat.)

The factors that determine voluntary feed intake of broiler chicks might play an important role in swine nutrition as well, and should perhaps be considered to enable predictable performance of grower-finisher pigs. The DE content of feed appears to determine feed intake of grower-finisher pigs within limits. The voluntary feed intake of pigs given feeds based on wheat bran, dried citrus pulp and grass meal, in relation to measurements of feed bulk.
More information about Papua New Guinea is available on the Papua New Guinea Page and from other Department of State publications and other sources listed at the end of this fact sheet. U.S.-PAPUA NEW GUINEA RELATIONS. The United States established diplomatic relations with Papua New Guinea in 1975, following its independence from a United Nations trusteeship administered by Australia. As the most populous Pacific Island state, Papua New Guinea is important to peace and security in the Asia-Pacific region. The United States and Papua New Guinea have enjoyed a close friendship, and the U.S. Government seeks to enhance Papua New Guinea’s stability as a U.S. partner. The United States builds the capacity and resilience of Papua New Guinea to adapt to climate change through regional assistance that covers 12 Pacific Island countries. United States assistance supports Papua New Guinea’s efforts to protect biodiversity; it contributes to the Coral Triangle Initiative to preserve coral reefs, fisheries, and food security in six countries including Papua New Guinea. U.S. military forces, through Pacific Command in Honolulu, Hawaii, provide training to the Papua New Guinea Defense Force and have held small-scale joint training and engineering exercises. The United States imports modest amounts of gold, copper ore, cocoa, coffee, and other agricultural products from Papua New Guinea. Papua New Guinea is a party to the U.S.-Pacific Islands Multilateral Tuna Fisheries Treaty, which provides access for U.S. fishing vessels in exchange for a license fee from the U.S. industry. According to U.S. Census Bureau data, in 2016 the United States exported $126.8 million worth of goods to Papua New Guinea and imported $91.8 million worth. Papua New Guinea also belongs to the Pacific Islands Forum, of which the United States is a Dialogue Partner. Papua New Guinea maintains an embassy in the United States at 1779 Massachusetts Ave. 
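The 2016 Census Bureau figures quoted above imply a modest U.S. goods trade surplus with Papua New Guinea; a one-line check:

```python
exports_musd = 126.8   # U.S. goods exported to PNG in 2016, millions of USD
imports_musd = 91.8    # U.S. goods imported from PNG in 2016, millions of USD
surplus_musd = exports_musd - imports_musd
print(f"U.S. goods trade surplus with PNG, 2016: ${surplus_musd:.1f} million")
```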
The Guinea Pig course is suitable for anyone working or volunteering with guinea pigs in an animal shelter or veterinary surgery. The course is also of interest to guinea pig owners wanting to learn more about the care and welfare of guinea pigs. Module 1 examines the routine health checks required for guinea pigs. Learn how to recognise and deal with parasites, as well as common diseases and health disorders that affect guinea pigs. Learn about the nutritional requirements of guinea pigs and how to ensure an adequate diet that also incorporates aspects of environmental enrichment. As well as guinea pigs, the cavy family includes the mara, the capybara, and several other species of cavy, such as the yellow-toothed cavy. Guinea pigs are susceptible to a wide range of diseases and disorders. As well as health care and nutrition, the correct environment and sufficient exercise and enrichment also play a major role in keeping guinea pigs happy and healthy. Dry foods are much richer in energy than grass, meaning that domestic guinea pigs can be quite prone to excessive weight gain. Having access to grass in a large run can certainly be considered environmental enrichment for guinea pigs, as it allows them to express their natural grazing behaviour. This guinea pig course explores how to provide environmental enrichment and adequate nutrition, carry out health checks, recognise common health problems, and deal with parasites. The course is of relevance to anyone working with or owning guinea pigs.

The bacterium Chlamydia caviae normally causes pink eye in guinea pigs. Three adults in the Netherlands wound up hospitalized for pneumonia after contact with guinea pigs resulted in their infection with C. caviae. Dr. Steven Gordon, chair of infectious disease at the Cleveland Clinic, said the cases are a reminder to practice good hygiene around pets. The two people who landed in the ICU had guinea pigs as pets, and those pets had been sick with respiratory symptoms.
The man had two guinea pigs, while one of the women had 25, researchers said. The other woman worked in a veterinary clinic, where she cared for guinea pigs suffering from pink eye and nasal inflammation. Doctors detected Chlamydia bacteria in samples drawn from the patients and at first figured it was Chlamydia psittaci, a bacterium carried by birds that is known to cause a form of pneumonia called psittacosis, Ramakers said. The analysis also matched the DNA of C. caviae in one of the patients’ guinea pigs to the bacteria that had infected its owner. Not all guinea pigs carry C. caviae, but many likely do, Ramakers said. An earlier study found the bacteria’s DNA in 59 out of 123 guinea pigs with eye disease. Don’t give away your favorite pet guinea pig just yet, though.

Guinea Pig Health and Wellness – Guinea pigs are large rodents weighing about two pounds, with a lifespan of five to seven years. Guinea pigs are easy to handle, docile creatures sturdy enough for even small children to hold. Once you have hand-tamed your guinea pig, you should let him run around in a small room or enclosed area to get some additional exercise every day. You will need to carefully check the room for any openings through which the guinea pig could escape, get lost, or possibly end up hurt. Guinea pigs are very conscientious about grooming themselves, but brushing them on a regular basis will help keep their coat clean and remove any loose hairs. Long-haired guinea pigs should be brushed daily in order to prevent tangles and knots from forming. Housing – Guinea pigs are social animals, preferring to live in small groups. Plan to provide at least four square feet of cage space per guinea pig. The ASPCA recommends offering small amounts of fresh fruit and vegetables to your guinea pigs every day. While guinea pig pellets and certain fruits and vegetables contain vitamin C, the best way of ensuring your pet has enough vitamin C to keep him healthy is to give him 50 mg of vitamin C daily.
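The four-square-feet guideline above scales linearly with group size. A small sketch (the function names are my own, not from the ASPCA):

```python
MIN_SQFT_PER_PIG = 4  # minimum cage floor space per guinea pig, per the guideline above

def min_cage_area(num_guinea_pigs):
    """Minimum cage floor area, in square feet, for a group."""
    return num_guinea_pigs * MIN_SQFT_PER_PIG

def cage_is_big_enough(length_ft, width_ft, num_guinea_pigs):
    """Does a rectangular cage meet the minimum for this group size?"""
    return length_ft * width_ft >= min_cage_area(num_guinea_pigs)

print(min_cage_area(3))             # floor space needed for a trio
print(cage_is_big_enough(4, 2, 3))  # an 8 sq ft cage checked against that minimum
```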
Veterinary Care – Most guinea pig health disorders can be prevented with appropriate housing, nutrition, and daily vitamin C administration. Guinea pigs are much easier to treat in the early stages of a problem. Children should always be supervised when playing with a guinea pig. When children hold a guinea pig, they should be sitting on the ground with the guinea pig in their lap; this way the guinea pig does not have very far to fall if it struggles and the child lets go. If a guinea pig is dropped from any height it may suffer serious injuries, such as broken legs or internal damage. In recent years it has been a trend for guinea pig dry foods to contain molasses; such sugary feeds can encourage gut bacteria that produce toxins that can result in the death of the guinea pig. Bran is especially good for the older guinea pig that is losing weight. Greens are especially important for guinea pigs, as they supply them with vitamin C. Guinea pigs must have a dietary source of vitamin C or they will become sick and die of vitamin C deficiency. Guinea pigs obtain most of their water through leafy greens, so you may notice that your guinea pig does not drink very much. An itchy guinea pig will scratch itself, and large scabs can form. The broken skin can become infected with bacteria and make the guinea pig very sick. If your guinea pig is itchy, it should be brought into the clinic for an examination and treatment. If your guinea pig is showing these symptoms, it should be brought into the clinic for examination.
Despite being unqualified to comment on differences between Berlin and Cambridge, since I’ve only been here for a month, I figure that’s more than a lot of people, so I’m going to go ahead and do it. Also, it may seem unfair comparing a small university town in England to the capital of Germany- but there you go, those are already the first differences. I was asked yesterday what my favorite part about Cambridge is and after thinking about it for a second answered: how close everything is. Now, don’t get me wrong- the public transportation in Berlin is amazing and for every point of departure and destination, there are at least 3 options leaving within 5 minutes or so. However, the city is quite large and the commute times can be a bit rough. My brother had an hour-long commute for a year and mine to the university is always at least 35 minutes (if the connections are all perfect). In comparison, almost everything within Cambridge is within 1.5 miles and a 10-minute bike ride. I go home a lot more in-between classes, library work, and evening work or activities, which is a nice lifestyle change. Another thing I really like about Cambridge is how everything feels like it’s designed to take care of as much of the extras of adulting as possible. People who live on campus in the US know this feeling of having meals and some housekeeping taken care of, but in Berlin, this is not a thing. I can have amazing meals in the Mensa (cafeteria) during the day, but on days I don’t go to the office or classes, I’m on my own. Even though I like preparing my own meals, if I didn’t, I would still be much better taken care of in Cambridge. That being said, the prices here take some getting used to. Maybe it’s the conversion that’s just making me anxious, but after coming from Berlin, I feel like I’m spending twice as much a week on groceries. Thank goodness for Aldi, because that at least balances out the 12 pounds spent on the cheapest entree and a beer at any pub around.
At the same time, in Cambridge’s favor (and being a student here), I do appreciate the housekeeping. Initially I thought it would be weird to have someone come into my room once a week to clean- I mean, my mother taught me better! I can do it myself! But now I do appreciate it. I don’t have to think about it and can focus on work- which is the idea, of course. I also am grateful that the communal kitchens are cleaned, because from experience I know that chore charts only work as long as EVERYONE follows them. Alright, that last one wasn’t Cambridge proper, mostly just dorm life, but since the life here seems to revolve around students (and tourism), it’s not a far stretch. That’s something I miss about Berlin- the diversity of people and diversity of the things people do. It’s the center of political and cultural life in Germany, and since those are two of my priorities, I feel quite comfortable and always engage in a lot of intellectual conversation there. Here in Cambridge, a lot of people do things other than studenting- there are tons of music and sports groups- but it feels like everyone has to do everything so well and people take themselves too seriously. They tell themselves they don’t take it so seriously, and try to be tongue-in-cheek about it all, but then they do seem kind of disappointed if you didn’t take it seriously. Furthermore, Cambridge gets a lot of credit for being an intellectual hub, but it’s like they only know how to be intellectual in theory. They’re missing some of the practicality of life getting in your face that I’m so used to in Berlin. Still, it seems like Berlin is facing its own challenges with increasing hipsterfication and gentrification, so who knows how long it is before I’m complaining about this in Berlin as well. I will say that for its small size, Cambridge offers more than its share of theater and music.
Let’s see; I’ve covered food costs, student life, intellectual life- I guess what remains to comment on is the feel of the city and its architecture and green spaces. I can say that right now I appreciate Cambridge and Berlin equally- Berlin has so much innovation in its architecture and the mix of old and new just hits me every time I see it, but there’s something ultra charming about the old English houses and I also just keep stopping and snapping a photo when I see a new angle on one of the old colleges, or go down some new cobbled street. I so often feel like I can’t take a breath that’s not imbued with history. And don’t get me started on the Cam River- how clean everything feels- or the pretty flowers that are appearing everywhere. I guess, as a runner I’m also grateful for the proximity of Cambridge to seemingly endless fields. I say seemingly, because as I discovered during my first runs in Cambridge, there are a lot of private fields and, in general, just a lot of fences in Cambridge. Colleges are closed off from one another with them, streets often end in more fences. It’s a bit frustrating, since even though Berlin is a concrete jungle, I can just keep turning another corner and almost never end up in a dead-end. Furthermore, Berlin has the Grunewald and the Tierpark and various other green spaces. Cambridge’s green is around the city- still very green, but you have to find it first. And it’s not many trees- just a lot of open fields- which is lovely until you’re trying to get across in 25 mph winds. But I’m not complaining. I’m quite happy here and I’m getting done what I came here to do, so that’s the most important part. On that note, here’s the lecture hall photo I promised. It doesn’t seem as novel as it did the first time I saw and sat in it, but it’s still pretty cool. The last thing I wrote for this blog was about the major events happening in Berlin in October 2018 that I experienced as a sort of Randfigur from the sidelines. 
And I ended with a slightly ambiguous note about changes. It needn’t have been so ambiguous, since we are all changing all of the time, but the changes that spiral into other changes seem noteworthy, and those are the ones I would like to focus on today. I finished up teaching the first class I completely conceptualized and designed myself at a German institution, completed my first year of the PhD scholarship program, had a minor lip revision to fix something left over from cleft-lip repair, presented at two conferences, despite not having presented since April 2015, and finished writing my first chapter of the dissertation. The less perfect newsreel includes failing to meet the goal of writing two chapters by the end of the year, missing out on time with my brother for a few months despite us living in the same city, a few hefty debates with my family about the future, getting a speeding ticket on my way to the Darß marathon, not getting a BQ despite running 2 marathons and getting a PR (the Boston Athletic Association chose to lower the qualifying time a few weeks before my race), and living a perfectly single life for over a year now (but as they say, it’s better to be single and mostly happy than in an unhappy relationship). I think it’s worth mentioning the negative since otherwise I would present a heavily skewed positive impression of my year. Now, contrary to the fact that I have been incredibly lax with this blog, I am not planning on retiring or closing it, like I did with the running one. But I am reevaluating my goals and uses for it. It’s been apparent for some time now that I have become more sensitive to differences in the US than in Germany, and I no longer find daily inspiration for things that might be interesting to a US reader, since everything I am faced with in Berlin has become more or less usual for me.
Now, I experience counter culture shock when I enter a super Target and walk almost a mile around this single store, getting lost in the different departments. Meanwhile, I have attended a podium discussion about language change to reflect our more diverse societies and organized a workshop meant to help participants identify and understand other ways of being and belonging beyond nation, culture, and genes. The good news is that despite not having a lot of motivation anymore to write about my experiences in Berlin, I am in the middle of the start of another adventure. Officially, as of last Monday, I’ll be in Cambridge, England for two terms of research and writing, but I do also plan to see some more of England and Scotland, and therefore I should have enough new and exciting things to write about. The only thing that could get in the way of that is my priorities catching up with me, as by this time next year, I should be pretty close to finishing the dissertation. Hello there! I wouldn’t be hurt if you forgot this blog existed. I kind of forget that myself too, sometimes. I tell myself, if anything, at least I’m posting once every calendar month. I’ll try to make up for quantity with quality! Where to begin? September in Berlin was surprisingly mild and sunny, and this carried on well into the month of October. It was much more comfortable than July and August, when temperatures were well above 30 centigrade, so everyone was happy about the weather except the flora, which just didn’t get enough rain. However, as the month wore on, the leaves changed colors, the sky got darker earlier, and now a few rainy days have refreshed the last bits of green around here. October saw another celebration of German reunification, 28 years on. And while I didn’t know that each year another German city hosts the nation for a week of celebrations, I figured it out this year since Berlin was the host. The Strasse des 17.
Juni basically went from one event prep to the next, as just a week prior to this, the Berlin Marathon happened, which saw the world record broken once again (7 times in 15 years!) by Eliud Kipchoge. Just a few days after the Unity Day celebrations, the lights and projectors were set up for the Festival of Lights that happens here yearly. I posted more extensively about this the first time I saw the different exhibitions two years ago. A week into the Festival of Lights (it ran from 5-14 October), there was also the massive demonstration in Berlin for inclusiveness and solidarity and against right-wing extremism and racism, the #unteilbar Demo. For academic reasons I missed most of the demo, but I was able to participate in the last hours. So, basically, the Strasse des 17. Juni is open for the first time in over a month. I'm sure Berlin's car commuters are relieved about this. Other than riding up and down the street on my bike for the various events and activities, I also crossed a major goal off the bucket list and ran in one of the most (in)famous stadiums in the world. From this photo, it looks like my smiling self is leading the pack. Not shown: quite a few people who had already made it around this turn of the track. The Berlin Olympic Stadium, built for the 1936 Olympics, was one of Adolf Hitler's showcase projects before he started WWII (a very offhand way of putting it, I know. I'm sorry). Jesse Owens famously won four gold medals there in 1936, becoming a game changer much like Usain Bolt, who broke the world record here in 2009. Fun fact: my brother and I were out on a run once during that summer of 2009 and were able to sneak into the stadium to see the 300m hurdles, because we looked like participating athletes. I've also done the official hosted tour of the Stadium once or twice.
However, I've never been able to run on that famous blue track, and so when the opportunity came through the European Association for the Study of Diabetes to run a free 5k on the Olympic Stadium grounds, I kind of hoped this would include the track. And it did! And now I can say I've run on that track like the exceptional athletes before me. Finally, to round out the last interesting news from Dorothea in Berlin: there are various literary events happening all over the city on an almost daily basis. It's almost more exhausting to figure out what to do than to get ready to do something, and my priorities have shifted a little from exploring to writing, but there are still opportunities to join the Friday night revelers in Kreuzberg, Neukölln, and the like, to think about the ways the city is changing. Shifting resources, shifting demographics: the city is constantly changing, and sometimes, a bottle of beer in hand, standing by the exit to the last station on the line, watching the people come and go to catch the connecting buses or grab a garlic-sauced Döner, thinking about the days behind me and the days ahead, I can just feel myself changing too. I've most likely mentioned this before, but as a runner I've always appreciated how easily navigable Berlin is, and the ability to cross more than half the city in a 2-hour run, seeing a lot of the major monuments and landmarks in the process. As a point of reference, I ran 18 miles through London back in July and still only saw about 1/5 of the city's major landmarks (albeit I also got lost a bit and repeated some stretches). Despite being in Berlin for two years now, and many summers before 2016, I finally managed to take my camera on a run. Below, you can see my last run through Berlin before heading back to Florida for the rest of the semester break; I've numbered the locations of the photos I've taken and included in this post.
It's important to note that, seeing as I turned south and then west again at the Brandenburg Gate, this post only covers interesting points in former West Berlin. There are a lot of equally interesting and important monuments and landmarks in the former East as well! I also didn't include the Grunewald or Wannsee, which easily make up a long run in themselves. But here goes: a guide to running this part of Berlin. Start at Theodor-Heuss-Platz, the western end of the Kaiserdamm, which begins in Berlin Mitte as Unter den Linden and extends straight for about 4 miles. This little green-covered plaza is marked by a nice blue monument that turns clear to let the late sunlight come through in the evenings. Heading east, you can run by the Convention grounds, with the central bus station mere houses away to the south. Head east until you get to the aptly named Schloßstraße, which will get you to Berlin's largest castle, with grounds that have a circumference of more than a mile. After running through the Schlosspark, which features mausoleums, flowers galore, and even some sheep (royal properties are expensive to maintain; one needs to save somewhere), one can head east again, this time along the Spree River that runs through Berlin. Factories were often built along rivers in the 19th and 20th centuries, so one finds Nivea, BMW, and other well-known names while running back towards the Kaiserdamm, which has now changed to Strasse des 17. Juni. One can continue heading east here until one reaches the golden angel that stands on top of the Victory Column, easily one of the most well-known monuments of Berlin. It features a 270-step climb to a viewing platform that I wouldn't advise visiting mid-run, but it deserves a separate visit. Instead of visiting the column, one can continue along the roundabout, which features other famous monuments to pre-World-War generals and, of course, Bismarck.
These sculptures, and the Tiergarten in general, add to the feeling of a Berlin before the destruction of the World Wars. The park initially served as hunting grounds for the king before being transformed into a green space in the middle of the city where one could see and be seen (or not; it was also a hiding spot for some illicit activity) on weekends and in the evenings. Continuing through the Tiergarten, one eventually comes back onto the Strasse des 17. Juni, where one happens upon a monument to the Soviets' role in World War II. The history of Berlin during the war deserves its own post, but it may suffice to say that Berlin was one of the last battlegrounds of the war, and the Allies had agreed to let the Soviets advance on the city first, trusting that control of the city would be split after the war. The war memorial just south of the Reichstag reminds us of this role and of the many Soviet soldiers' lives WWII took. Like most post-war Soviet memorials, the display features impressive life-size tanks and a larger-than-life model of a soldier. Now, while I didn't do this on this run, one can easily skip a little north of this memorial and see the home of the German parliament (Bundestag) in the Reichstag. Instead, one can also just continue heading east to find THE German monument par excellence: the Brandenburger Tor. Now, unfortunately, there's some construction going on on the right side, but at least there are not a lot of people. This is only because it is 7 AM. Come anytime after 8 AM and you won't get a people-free shot. This is why it's recommended to be an early-rising runner. It's also recommended because then one can beat the crowds in this part of Berlin, which is Berlin Mitte: very popular with tourists, politicians, and business people. It's also near a lot of important embassies, such as the French, US, and UK ones, among others.
To continue, one can head down the east or west side of the Brandenburg Gate to come back around to the front of the US embassy. From here, one can see the Memorial to the Murdered Jews of Europe. Placed on about 4 acres of land, this memorial is one of several memorials in Berlin to victims of the Holocaust, though this one is specifically for the Jewish victims, and some people, like the author of this opinion piece, explain some of the controversy around the design and the name. I personally can't help but feel overwhelmed by the meaning of the columns and the feeling of angst incited by walking among these tomb-like structures, but there is some question about the effectiveness of the reminder it represents. As a runner, I run by it, but it also deserves a separate visit. There is a documentation center on site that takes some time to go through as well. The west side of this memorial faces the Tiergarten again, and it is this space, the southern part this time, that one can continue along, passing even more embassies. The architecture of these buildings is always unique, decorated with the flags of countries from all over the world, with cultural notes that could make up a tour in themselves. This last part of the run, other than bringing one through more of Berlin, is pretty uneventful until one gets back to the Kurfürstenstraße that leads to the Berlin Zoological Gardens and, of course, eventually the U- and S-Bahn station of the same name. I visited the Zoo with my mom and bro last year, so one can read about that here. The entrance way is iconic, and there's just a little bit of cultural appropriation here, but it is an interesting visit as well. Just a little further down the road one finds Breitscheidplatz and the ruins of the Kaiser Wilhelm Memorial Church. This summer some of the European Championships were held in Berlin, so the stands for spectating were just being taken down as I ran by. Those are obviously not always there.
What is there and not photographed is the small memorial to the victims of the 2016 Christmas Market attack (just behind those stands). Unfortunately, as is apparent from the run, the occupants of the city cannot escape its history, as the reminders are always all around. At least there's a lot to keep one from dwelling too much on this as well (e.g. the live music at an Irish pub advertised in this photo). Berlin is a sobering, ugly, and yet beautiful and lively history-conscious city, all at once. Speaking of not dwelling on things, the run doesn't end here (though it easily could). For me, it led along the Hardenbergstrasse past the station, the Technical University, and on to Ernst-Reuter-Platz. From there, one can head west again on the Bismarckstrasse (aka Strasse des 17. Juni aka Unter den Linden aka Kaiserdamm) until one gets back to where one started. Given the right conditions and the right training, this tour is manageable in under two hours. There are enough quick shops and stations along the way (even a Starbucks and a Dunkin' Donuts by the Brandenburg Gate) to get one through the run if one has some spare change. I wouldn't encourage using the Tiergarten as a toilet, though it is possible in emergencies. However, there are some public restrooms at the Victory Column, the Gate, and near the Zoo Bahnhof. Obviously, this tour is just one of many possible runs in Berlin. However, for the tourist who is also a long-distance runner, it does the job of seeing a lot in a little time and having a lot to write home about. Hello readers! I know that most of you who have been following my blog for a while have gotten used to the rather sporadic posting. I think, before disappearing, I'd settled into a comfortable post a month. However, to go four months without blogging? Well, I can't really say that I'm consistent (except maybe in my inconsistency).
But I have to say that when the summer semester started at the Freie Universität, I got sucked into three seminars, a colloquium, and actual writing for the dissertation. Then my parents arrived for their yearly vacation in Germany, and it was a safe bet that I'd barely make it to WordPress. So here are the posts that could have been. "How medical operations are like races": I drafted something comparing doctor's appointments, pre-surgery clearance, getting high off anesthesia, and feeling like crap afterwards to the reality of running something like a marathon (spoiler: the recovery is rough for both, but gets easier the more you do it!), but that post will have to wait. It would have gone together with another post, which I may actually get to at some point, about having a craniofacial birth defect. "Hamburg revisited": three years after studying in Hamburg, I finally visited with all three of my family members and was able to show them the part of the campus I studied at, as well as the Speicherstadt and part of the Hafen. "Hogwarts: in real life" (these last two titles would be for the posts where I talk about my conference trip to Cambridge with a little detour to London). "Germany's super summer and bike tours that go wrong": seriously, if you think it's a bad idea to go out for a ride when the temps are above 30 degrees centigrade, you're probably right. Getting lost, or flat tires without a repair kit on you, doesn't help either. "Four reasons why living in a big city is not so great": to include the noise, the smell, and close contact with people who don't know their limits on drugs or alcohol. Basically a college campus on the weekend, but on a larger (maybe more dangerous) scale. "Revisiting old haunts with new eyes": despite traveling to the Baltic coast every summer since I was 4, there's always something new to experience, or something old to experience in new ways. And finally, "running through (west) Berlin".
Stay tuned, because I'll actually be posting this one! In summary, I had a good summer. Mostly work, but also a lot of play. Hope you did as well, and good luck to those getting ready for new school years and semesters!! P.S. While I was goofing off away from the web, WordPress celebrated my four years of being on their site. Yay. Happy writing anniversary to me. Before I start: no, I don't think airports are great places for cows either. It's been three weeks since I flew out of cold Berlin to the Sunshine State, and now I'm back again. I actually meant to write a post about the trip home right away, for, you know, posterity's sake. Because my opinions are, like, really important. But now I have 2 more airports to talk about. I could just scrap the post, seeing how late it is, but some interesting and funny things happened, and it's almost the end of the week (or the weekend when you read this), and you could really use a fun read, right? I'll at least try to keep it fun. My trip on March 15th started with a 3:30 wake-up call, because the plane was leaving at 6, and I've heard stories of people who headed to one of Berlin's two operational airports only to find out their plane was leaving from the other side of town. I obviously wanted to avoid missing the flight! And I wouldn't have wanted to miss celebrating birthdays and Easter with my family, either, of course. Although I had already checked five times, I still left early, and it felt good to arrive at luggage check-in with more than an hour before human check-in. There, I was reminded that Tegel Airport is the small, old airport that it is. And when I mention Tegel's age, I actually need to acknowledge Berlin history. Berlin once had 8 airports, and, as one can imagine, these were heavily used during World War II. Most of these airports were key for the war effort and were meant to be closed after the war.
In fact, Tegel (TXL) would have been shut down after WWII if it hadn't been for the Soviet blockade of West Berlin in 1948 and the ensuing Berlin Airlift. But since the airport was needed, it lived to serve Berlin through the Cold War. And since the major BER airport, under construction since 2006 and meant to open in 2011, is STILL not open, TXL lives on to fly good people like myself in and out of the city. And that's a wonderful thing, since while Schönefeld and BER are 18 km out of the city center, TXL is a mere 5 km and a 20-minute bus ride away. But whoops, I got off track. So I was saying that I was reminded of TXL's size and age, and this is because my walk from check-in to the security check was 20 meters, and the security line itself was only 50 meters long. Can you imagine a time when security wasn't necessary? You could after being in Tegel. But anyway, before I knew it, I was out of the waiting area of the terminal, and I barely had time to check out the same 5 kinds of candy and alcohol and perfume found in any duty-free shop in any airport in. the. world. First stop: Charles de Gaulle Airport in Paris. I'm ignoring the namesake for brevity's sake (look him up!), but I do want to talk about something that struck me about this large airport in Paris: its two priorities seemed to be the highest-end shopping I've ever seen in an airport and the fancy patisseries and, well, croissants. They were awesome. I felt so cool ordering cappuccino and croissant with my high school French, until I had to revert to English when they needed smaller change for my payment. The stark contrast between Charles de Gaulle and the German and US airports I've been to made me more aware of the other airports I went to: Atlanta and Fort Lauderdale International. Travelling through several international airports, I learned that one can tell a nation's priorities based on the venues offered. European airports seem to feature a mix of shopping and food.
German airports especially have a lot of news/book/paper supply stores. But then Atlanta also surprised me with its very well done decorations between terminals. Besides having one food place next to the other, the airport still gave the impression of being interested in sharing its history, geological heritage, and culture. One of the busiest airports in the world, Atlanta (or Hartsfield–Jackson Atlanta International) features an indoor railway that brings people from terminal to terminal. One could opt to take the shuttle, but my layover was long enough to walk the not-so-long distance between terminals F and A and see the sights along the way. And I'm not telling you where this was, because I want to encourage you to walk and discover for yourself. Or I forgot. What really impressed me was a timeline of the history of Atlanta. Created by the artist Gary Moss, 'A Walk Through Atlanta History' is a permanent exhibit in the transportation mall between terminals B and C. It reminds people of the Cherokee and Creek tribes who inhabited Georgia before they were forced to leave their lands during Andrew Jackson's presidency. For a country that likes to wash over this difficult part of its past and present, acknowledging the indigenous people of Georgia is a bold move (and if you think "duh, of course it had to be acknowledged," check out some history books from before the 1960s). The Indian Removal Act that led to the "Trail of Tears" was not mentioned, but I admit… that may have been too much to ask (or perhaps not? Comments below!). But yes, I enjoyed my walk, and after Atlanta, arriving in Fort Lauderdale was a bit of a letdown. It's a bit too old and unimpressive, with low ceilings and gray walls, to really make a great airport to come home to. But I'm sure plans for renovations are in place. Okay, enough with the history lessons and facts already. You're probably asking where the funny stories are that I promised.
Let's start with me informing you who/what the real MVP of my trip was: my bladder. Marathon training and international trips don't mesh well. After the trip, I was glad for my excellent hydration habits. During? Not so much. My short trip from Atlanta to FLL was the only one where I had an aisle seat. From Paris to Atlanta, I spent 9 hours stuck between two guys, and since I hate asking people (unless it's my brother) to get up, I kept it down to three requests. Window seat guy didn't get up once. Seriously?! See? I'm terrible at telling funny stories. I'll try again. This time, it starts with a Ukrainian in seat C on the way from Paris to Atlanta. Not being one to start conversations with strangers, I kept my earbuds in and tried to be a responsible PhD student and work on my much-too-large-for-an-airplane laptop, and then digressed to watching Despicable Me 3 (yes, I'm an adult child, though I laughed enough to make window seat guy start watching it too; not sure if he had as much fun as I did). At some point, though, after the second time I asked him to get up, aisle guy, in a thick accent, proceeded to tell me that he's Ukrainian and his English isn't so great. Could I help him fill out the customs form? "Sure!" I say. And then we proceed down the sheet. I get through mostly okay. I don't tell him that I took three years of Russian (which is related to Ukrainian) in college, in order to avoid unnecessary attempts to hold a conversation after this good deed is done, but when we get to the question about handling livestock, I wish I had. I also wish my three years of study had taught me what livestock are in Russian. I tried as many examples of livestock as I knew. I wasn't even sure if chickens counted as livestock. For some reason, I only mentioned hooved animals. Mostly, I was hoping he would understand the word cow… He didn't. And I couldn't even tell him the Russian word, because I've forgotten all my Russian, it seems. So remember, kids.
Take your language studies seriously. You never know when you might need them! No, I don't think airports are great places for cows either. Let's not forget the windmills. Check out the shadow effects! Clearly, Amsterdam Airport's priority was making sure you didn't forget you were in the land of windmills and cows. About a week and a lifetime ago (every week is a lifetime when on a break from normal routine), I finished attending the Berlinale. As previously explained, there are various sections of the Berlin film festival, and I attended mostly the Generation 14+ movies, with a K+ awards ceremony thrown in. It's not that I'm not ready for the "adult" movies yet. It's mostly that the Generation section movies are easier to coordinate and attend. And honestly, they're not any less demanding of empathy, understanding, or the ability to feel sad. This year, I saw four 14+ movies: 303, which I mentioned last week, Kissing Candice, High Fantasy, and the winner of the 14+ section: Fortuna. Of the four, 303 and Kissing Candice were more about growing up and becoming an adult. However, Kissing Candice, an Irish movie with a sub-plot about a gang of unruly, drug-using, violent boys, also crossed the border from entertainment into political commentary, which is what High Fantasy and Fortuna definitely were. Before I continue, I should mention that I've always been weirdly involved in politics. Perhaps my transnational heritage caused me to question the point of nation, and therefore of state, and therefore of borders, and therefore of what happens within those borders, etc. But despite my degrees in literature, I've attended my share of political seminars, and my PhD project is actually a weird intersection of literature, media, and politics (aka cultural studies), and so I consider myself qualified to talk about politics. Also, as Percy Bysshe Shelley proclaimed, "Poets are the unacknowledged legislators of the world," so who knows!
I know that you are probably rolling your eyes right now. It's okay. I roll my eyes at myself a lot too. Anyhow, as far as the Berlinale goes, my favorite movie was High Fantasy, a South African movie questioning the success of the "rainbow nation" and highlighting contemporary tensions in race politics and discrimination. It wasn't just the topic that had me on board. I just really enjoyed the story, how the characters switched bodies, and the nod to 80s-style sci-fi a la Stranger Things. The race politics and the way People of Color were treated in South Africa through the 90s did not disappear with the end of Apartheid, of course. We see this in the USA too, and here we supposedly ended segregation decades earlier. On top of continuing tensions between People of Color and Whites in South Africa, showcased in various high-profile events and protests in the past few years, there is also an increasing awareness of LGBTQ+ rights. So where do these issues find an audience? In front of the Berlin 14+ audience. Unfortunately, the movie has yet to be screened in South Africa, but maybe building up a portfolio of positive responses elsewhere will give director —- the strength (and financial means) to show it in South Africa and perhaps even get action-inciting conversations going. Similarly political, but closer to home, was the movie Fortuna. Chosen as the winner of this year's 14+ section of the Berlinale by both the international and public juries, Fortuna was honestly one of the most difficult movies I've seen in a while. It wasn't terribly traumatic or TMI. Rather, it was just painfully slow. I'm sorry that I say that like it's a bad thing. I'm definitely a fan of artistic movies, and I agree that we don't need fifteen shots in just as many seconds and that the Hollywood combo of comedy and action just grates the sensibility to finely shredded stinky cheese. However, some humor is needed.
And while the black-and-white cinematography was aesthetically beautiful, and the close shots of two men having a conversation for 15 minutes quite, well, unusual, I was finding it hard to focus. The story revolved around a 14-year-old refugee from an African country and her fate, specifically her ability to choose her fate (Catholic faith and questions about abortion played an important part in this movie). She had been sent to live in a Swiss convent led by a group of well-meaning, but perhaps unprepared, monkish type men. The movie, despite having some artistic merit, clearly won due to its attempt to take on the socio-political topic of the decade: asylum seekers in Europe. The focus on an individual and her fate as an unaccompanied minor, as well as all the Swiss government bureaucracy, and the humanity in the face of inhumane forces (not least an icy coldness that I could feel in my own bones, despite the fact that it was only an audio-visual representation), is maybe what won it the prize. I don't know. I can't judge the 16 movies that were up for selection, but I felt that the other movies I had seen had just as much merit, if in different ways. But oh well. That was that. The K+ section also featured some great, slightly politically involved movies. The short film that won was Field Guide to 12-Year Old Girls, an Australian movie that I could relate to and enjoyed, and the winning feature film was Les Rois Mongols (meaning "the idiot kings," awkwardly translated in English as "Cross My Heart"). This movie wasn't as political as the 14+ section films, even if it included some Quebecois left-wing politics and French versus English Canadian identity. However, it reminded me of the need children have to be included in decisions concerning the family, and of their need to be taken seriously. They can understand more than we give them credit for, and that should never be underestimated, it seems. So there you have it. Berlinale.
Four glass bears awarded, and I am 7 great movies richer for it. Moving on, I also took a few hours of my Saturday (after a long run, before the K+ awards ceremony) to pop into the Kennedy Museum (after spending an hour discovering that this wasn't the same as the Kennedy Haus) and visited the photo exhibit of Pete Souza's selection of photos taken during the Obama presidency. An exhibit for Obama in the presence of a permanent exhibit about John F. Kennedy was no coincidence. Both had a special and curated relationship with the people via the media, and this much can be seen in the photos taken of both throughout their presidencies. I personally am guilty of letting my impression of Obama take over my feelings about his political actions while in office (and afterwards). For me, he represents one of the most intelligent, upright characters that I know of. This is, of course, based only on what I (am able to) know about him, but I also hope that my opinion of him is never shattered by some news about what he did in office or afterwards. At any rate, many Berlin fans will know that Kennedy once gave a speech in West Berlin where he stated "Ich bin ein Berliner" (I won't explain the joke that Kennedy called himself a donut; you'll have to look that up yourself), and the visit from which this speech came endeared Kennedy to the hearts of many Germans, hence a whole 100+ sq. meters dedicated to the man in one of Berlin's more expensive corners. It's a pretty well-put-together museum, and I enjoyed my hour there. I can recommend it, but only if you're not allergic to politicians and photos. And thus endeth one post, with plans for the next one to cover a trip I took to Duisburg and Den Haag. As a small preview: Blaumeisen (blue tits), which are now my favorite birds. Yellow breasts and blue heads and feathers. You can't really see it here, but it's still a nice photo, I think.
Supervising students in a clinical training ward (CTW) has been practised for some 20 years. Studies show that interprofessional learning gives students an opportunity to gain a comprehensive view of a particular patient's health-care needs, as well as an increased and mutual understanding of their colleagues' roles and knowledge. Only a few studies have focused on the supervisors' view of their own role within the activity of the CTW. The purpose of the study was to describe CTW supervisors and their perception of their role as interprofessional supervisors, as well as to describe interprofessional learning at the CTW. A qualitative method was used, and 19 interprofessional supervisors from occupational therapy, medicine, physiotherapy, and nursing were interviewed. The interview texts were content-analysed. Three categories were identified: 'the supervisor', 'the supervision', and 'the concept of CTW'. The interprofessional supervisors showed a genuine interest in and commitment to supervising, working pedagogically, collaborating, and working with students. The supervisors all used different strategies, and they kept the team in focus, partly for the benefit of the students but also to show the team's importance to the patient's health-care situation. The CTW concept requires considerable time and dedication from the supervisor, but it is perceived as a good concept in which students can develop interprofessional collaboration. The supervisor's understanding of and approach to student learning makes a great difference in the process of supervision. Being an interprofessional supervisor requires pedagogical knowledge and an understanding of the group and of the group process. The students' team knowledge influences the CTW, which in turn affects the supervision. The concept of the CTW has a positive impact on the supervisors, and interprofessional supervision is perceived as stimulating and challenging.
Linköping University, Faculty of Health Sciences. Linköping University, Department of Clinical and Experimental Medicine, Division of Neuro and Inflammation Science. Those working on the description of disordered speech are bound to be involved with clinical phonology to some extent. This is because interpreting the speech signal is only the first step of an analysis. Describing the organization and function of a speech system is the next step. However, it is here that phonologists differ in their descriptions, as there are many current approaches in modern linguistics to undertaking phonological analyses of both normal and disordered speech. Much of the work in theoretical phonology of the last fifty years or so is of little use in either describing disordered speech or explaining it. This is because the dominant theoretical approach in linguistics as a whole attempts elegant descriptions of linguistic data, not a psycholinguistic model of what speakers do when they speak. The latter is what is needed in clinical phonology. In this text, Martin J. Ball addresses these issues in an investigation of what principles should underlie a clinical phonology. This is not, however, simply another manual on how to do phonological analyses of disordered speech data, though examples of the application of various models of phonology to such data are provided. Nor is this a guide on how to do therapy, though a chapter on applications is included. Rather, this is an exploration of which theoretical underpinnings are best suited to describing, classifying, and treating the wide range of developmental and acquired speech disorders encountered in the speech-language pathology clinic. Linköping University, NISAL - National Institute for the Study of Ageing and Later Life. Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
To investigate car use among newly retired people, to explore to what extent the car is used for everyday mobility, and how it is valued in comparison to other transport modes. The data consist of travel diaries and qualitative interviews with 24 individuals, aged between 61 and 67, living in a middle-sized Swedish city. They were recruited via the local branch of one of the main pensioners’ associations, one large employer in the municipality, and through another study. The informants filled in a travel diary during one week; the diaries were analysed with VISUAL-TimePAcTS, an application for visualising and exploring activity diary data. The semi-structured qualitative interviews were analysed using qualitative content analysis. The car was used for several trips daily, and often for short trips. The informants had many everyday projects that they would not be able to carry out without access to a car. The importance of the car does not seem to have changed upon retirement, although it is partly used for other reasons than before. The informants’ social context implies new space-time constraints: commitments to family members, engagement in associations, and spouses’ occupations affect how much and when they use the car, and their overall mobility. Aix Marseille Université, INS, Marseille, France; Inserm, UMR_S 1106, Marseille, France. Department of Bioelectronics, Ecole Nationale Supérieure des Mines, CMP-EMSE, MOC, Gardanne, France. Anglia Ruskin Univ, England; Nottingham Biomed Res Ctr, England; Univ Nottingham, England. Objectives: Specialist tinnitus services are in high demand as a result of the negative effect tinnitus may have on quality of life. Additional clinically and cost-effective tinnitus management routes are needed. One potential route is providing Cognitive Behavioural Therapy for tinnitus via the Internet (iCBT). 
This study aimed to determine the efficacy of guided iCBT, using audiological support, on tinnitus distress and tinnitus-related comorbidities in the United Kingdom. A further aim was to establish the stability of intervention effects 2 months postintervention. The hypothesis was that iCBT for tinnitus would be more effective at reducing tinnitus distress than weekly monitoring. Design: A randomized, delayed-intervention efficacy trial with a 2-month follow-up was implemented to evaluate the efficacy of iCBT in the United Kingdom. Participants were randomly assigned to the experimental (n = 73) or weekly monitoring control group (n = 73) after being stratified for tinnitus severity and age. After the experimental group completed the 8-week-long iCBT intervention, the control group undertook the same intervention. Intervention effects were, therefore, evaluated in two independent groups at two time points. The primary outcome was the change in tinnitus distress between the groups as assessed by the Tinnitus Functional Index. Secondary assessment measures were included for insomnia, anxiety, depression, hearing disability, hyperacusis, cognitive failures, and satisfaction with life. These were completed at baseline, postintervention, and at a 2-month postintervention follow-up. Results: After undertaking the iCBT intervention, the experimental group had a greater reduction in tinnitus distress than the control group. This reduction was statistically significant (Cohen’s d = 0.7) and was clinically significant for 51% of the experimental group and 5% of the control group. This reduction was evident 4 weeks after commencing the iCBT intervention. Furthermore, the experimental group had a greater reduction in insomnia, depression, hyperacusis, and cognitive failures, and a greater improvement in quality of life, as evidenced by the significant differences in these assessment measures postintervention. Results were maintained 2 months postintervention. 
Conclusions: Guided (audiologically supported) iCBT for tinnitus resulted in statistically significant reductions in tinnitus distress and comorbidities (insomnia, depression, hyperacusis, cognitive failures) and a significant increase in quality of life. These effects remained stable at 2 months postintervention. Further trials are required to determine the longer-term efficacy of iCBT, to investigate predictors of outcome, and to compare iCBT with standard clinical care in the United Kingdom. Anglia Ruskin Univ, England. Purpose: The primary aim of this study was to identify coping strategies used to manage problematic tinnitus situations. A secondary aim was to determine whether different approaches were related to the level of tinnitus distress, anxiety, depression, and insomnia experienced. Materials and methods: A cross-sectional survey design was implemented. The study sample comprised adults interested in undertaking an Internet-based intervention for tinnitus. Self-reported measures assessed the level of tinnitus distress, depression, anxiety, and insomnia. An open-ended question was used to obtain information about how problematic tinnitus situations were dealt with. Responses were investigated using qualitative content analysis to identify problematic situations. Further data analysis comprised both qualitative and quantitative methods. Results: There were 240 participants (137 males, 103 females), with an average age of 48.16 years (SD: 22.70). Qualitative content analysis identified eight problematic tinnitus situations. Participants had either habituated to their tinnitus (7.9%) or used active (63.3%) or passive (28.8%) coping styles to manage these situations. Those who had habituated to tinnitus or used active coping strategies had lower levels of tinnitus distress, anxiety, and depression. Conclusions: The main problematic tinnitus situations for this cohort were identified. 
Both active and passive coping styles were applied to approach these situations. The coping strategies used most frequently, and in the widest range of problematic situations, were sound enrichment and diverting attention. Anglia Ruskin Univ, England; NIHR, England; Univ Nottingham, England. Linköping University, Department of Clinical and Experimental Medicine, Division of Neuroscience. Linköping University, Faculty of Health Sciences. Linköping University, The Swedish Institute for Disability Research. Acquired hearing impairment is associated with gradually declining phonological representations. According to the Ease of Language Understanding (ELU) model, poorly defined representations lead to mismatch in phonologically challenging tasks. To resolve the mismatch, reliance on working memory capacity (WMC) increases. This study investigated whether WMC modulated performance in a phonological task in individuals with hearing impairment. A visual rhyme judgment task with congruous or incongruous orthography, followed by an incidental episodic recognition memory task, was used. In participants with hearing impairment, WMC modulated both rhyme judgment performance and recognition memory in the orthographically similar non-rhyming condition: those with high WMC performed exceptionally well in the judgment task but later recognized few of the words. For participants with hearing impairment and low WMC, the pattern was reversed; they performed poorly in the judgment task but later recognized a surprisingly large proportion of the words. The results indicate that good WMC can compensate for the negative impact of auditory deprivation on phonological processing abilities by allowing for efficient use of phonological processing skills. They also suggest that individuals with hearing impairment and low WMC may use a non-phonological approach to written words, which can have the beneficial side effect of improving memory encoding. 
Readers will be able to: (1) describe cognitive processes involved in rhyme judgment, (2) explain how acquired hearing impairment affects phonological processing, and (3) discuss how reading strategies at encoding impact memory performance. Region Östergötland, Center of Paediatrics and Gynaecology and Obstetrics, Department of Gynaecology and Obstetrics in Linköping. Background: To assess the impact of 10 years of simulation-based shoulder dystocia training on clinical outcomes, staff confidence, and management, and to scrutinize the characteristics of the pedagogical practice of the simulation training. Methods: In 2008, a simulation-based team-training program (PROBE) was introduced at a medium-sized delivery unit in Linköping, Sweden. Data concerning maternal characteristics, management, and obstetric outcomes were compared between three groups: prePROBE (before PROBE was introduced, 2004-2007), early postPROBE (2008-2011), and late postPROBE (2012-2015). Staff responded to an electronic questionnaire, which included questions about self-confidence and perceived sense of security in acute obstetrical situations. Empirical data from the pedagogical practice were gathered through observational field notes on video recordings of maternity care teams participating in simulation exercises and were further analyzed using collaborative video analysis. Results: The number of diagnosed cases of shoulder dystocia increased from 0.9/1000 prePROBE to 1.8 and 2.5/1000 postPROBE. There were no differences in maternal characteristics between the groups. The rate of brachial plexus injuries in deliveries complicated by shoulder dystocia was 73% prePROBE compared to 17% in the late postPROBE group (p > 0.05). The dominant maneuver to resolve the shoulder dystocia changed from posterior arm extraction to internal rotation of the anterior shoulder between the pre and postPROBE groups. 
The staff questionnaire showed that the majority of the staff (48-62%) felt more confident when handling a shoulder dystocia after PROBE training. The model of facilitating relational reflection that was adopted seems to provide ways of keeping the collaboration and learning in the interprofessional team clearly focused. Conclusions: Introducing and sustaining a shoulder dystocia training program for delivery staff improved clinical outcome. The improved management and outcome of this rare, emergent and unexpected event might be explained by the learning effect of the debriefing model, which is clearly focused on the team and related to daily clinical practice. Aphasia is an umbrella term for acquired language disorders. The symptoms occur most frequently in people who have suffered a stroke. People affected by some type of aphasia often experience that their communicative ability is greatly limited after onset, which can lead to considerable or very considerable social restriction. The psychological factors affected concern cognition and emotion, and social structures and relationships, for example personal relationships, working life or education. The psychosocial consequences of aphasia are thus related to how the condition affects everyday life and the ability to interact with the social environment. Today, intervention for people with aphasia takes place individually, and group intervention or the inclusion of significant others is seen as a secondary complement to individual treatment. The present study examines a form of intervention in which people with aphasia and their significant others jointly learn communicative strategies and receive individually tailored advice. Inspiration was drawn from SPPARC, an existing method that includes significant others. The goal of the intervention was to strengthen everyday communication and make it more effective. The intervention took place over a five-week period and was carried out in two different constellations: one couple and one small group. 
The group constellations were then compared with each other. Both concepts were evaluated positively by the participants. They thus both appear applicable, but depending on the participants’ expectations and psychosocial needs, one of the concepts may be preferred at an individual level. The results of the present study indicate that there is a psychosocial need to include significant others of people with aphasia in intervention. An increased awareness was noted among the participants regarding several of the communicative strategies addressed during the intervention, including gesturing and prompting. The participants’ evaluation of the period suggests that the intervention primarily functioned as a forum for conversations about aphasia and communication, and that it served its purpose in a psychosocial sense. During the evaluation, however, the participants stated that the intervention is best suited as an early effort after the onset of aphasia. University of Toronto, Department of Psychology. Baycrest Health Sciences, Rotman Research Institute. There are well-known age-related declines in hearing, cognition and social participation. Furthermore, previous studies have shown that hearing loss is associated with both cognitive decline and increased risk of social isolation, and that engagement in social leisure activities is related to cognitive decline. However, it is unclear how these three concepts and age relate to each other. In the current study, behavioral measures of hearing and memory were examined in relation to self-reported participation in social leisure activities. Data from two different samples were analyzed with structural equation modeling. The first consisted of 297 adults from Umeå, Sweden, who participated in the Betula longitudinal study. The second consisted of 273 older adults who volunteered for lab-based research on aging in Toronto, Canada. Structural equation modeling yielded two models with similar statistical properties for both samples. 
The first model suggests that age contributes to both hearing and memory performance, hearing contributes to memory performance, and memory (but not hearing) contributes to participation in social leisure activities. The second model also suggests that age contributes to hearing and memory performance and that hearing contributes to memory performance, but that age also contributes to participation in social leisure activities, which in turn contributes to memory performance. The models were confirmed in both samples, indicating robustness in the findings, especially since the samples differed on background variables such as years of education and marital status. Few participants in both samples were candidates for hearing aids, but most of those who were candidates used them. This suggests that even early stages of hearing loss can increase demands on cognitive processing that may deter participation in social leisure activities. Department of Womens and Childrens Health, Karolinska Institutet, Stockholm, Sweden. Region Östergötland, Local Health Care Services in Central Östergötland, Department of Child and Adolescent Habilitation. To develop the Mini-Manual Ability Classification System (Mini-MACS) and to evaluate the extent to which its ratings are valid and reliable when children younger than 4 years are rated by their parents and therapists. The effect of lean production on conditions for learning is debated. This study aimed to investigate how tools inspired by lean production (standardization, resource reduction, visual monitoring, housekeeping, value flow analysis) were associated with an innovative learning climate and with collective dispersion of ideas in organizations, and whether decision latitude contributed to these associations. A questionnaire was sent out to employees in public, private, production and service organizations (n = 4442). Multilevel linear regression analyses were used. 
Use of lean tools and decision latitude were positively associated with an innovative learning climate and collective dispersion of ideas. A low degree of decision latitude was a modifier in the association with collective dispersion of ideas. Lean tools can enable the shared understanding and collective spreading of ideas needed for the development of work processes, especially when decision latitude is low. Value flow analysis played a pivotal role in the associations. Ball, Martin, Linköping University, Department of Clinical and Experimental Medicine, Division of Neuro and Inflammation Science; Linköping University, Faculty of Medicine and Health Sciences. Crystal, David, University of Bangor, UK. Conventional electronic devices have evolved from the first transistors introduced in the 1940s to integrated circuits and today’s modern (CMOS) computer chips fabricated on silicon wafers using photolithography. This chapter reviews iontronic devices for signal translation and their application in bioelectronics. It begins with a brief description of the ion transport mechanisms that lay the conceptual groundwork for this type of iontronic device. The chapter presents various iontronic devices aimed at bioelectronic applications and outlines possible future developments of iontronics for human-machine interfacing. The physical interface between electronic devices and biological tissues is of particular interest, as it bridges the gap between artificial, human-made technologies and biological "circuits". Ion-conducting diodes and transistors can be used to build circuits for modulation of ion flow, with the possibility of mimicking the dynamic and nonlinear processes occurring in the body. This study aims to understand patterns in the social representation of hearing loss reported by adults across different countries and to explore the impact of different demographic factors on response patterns. The study used a cross-sectional survey design. 
Data were collected using a free association task and analysed using qualitative content analysis, cluster analysis and chi-square analysis. The study sample included 404 adults (18 years and over) from the general population of four countries (India, Iran, Portugal and the UK). The cluster analysis included 380 responses out of 404 (94.06%) and resulted in five clusters. The clusters were named: (1) individual aspects; (2) aetiology; (3) the surrounding society; (4) limitations; and (5) exposed. Various demographic factors (age, occupation type, education and country) showed an association with different clusters, although country of origin was associated with most clusters. The study results suggest that how hearing loss is represented among adults in the general population varies and is mainly related to country of origin. These findings strengthen the argument about cross-cultural differences in the perception of hearing loss, which calls for necessary accommodations when developing public health strategies about hearing loss. HEARing Cooperative Research Centre, Melbourne, Australia. University of Melbourne, Carlton, Australia. Objective: Patient-centred care is a term frequently associated with quality health care. Despite extensive literature from a range of health-care professions describing and measuring patient-centred care, a definition of patient-centredness in audiological rehabilitation is lacking. The current study aimed to define patient-centred care specific to audiological rehabilitation from the perspective of older adults who have owned hearing aids for at least one year. Design: Research interviews were conducted with a purposive sample of older adults concerning their perceptions of patient-centredness in audiological rehabilitation, and qualitative content analysis was undertaken. Study sample: The participant sample included ten adults over the age of 60 years who had owned hearing aids for at least one year. 
Results: Data analysis revealed three dimensions of patient-centred audiological rehabilitation: the therapeutic relationship, the players (audiologist and patient), and clinical processes. Individualised care was seen as an overarching theme linking these dimensions. Conclusions: This study reported two models: the first describes what older adults with hearing aids believe constitutes patient-centred audiological rehabilitation; the second provides a guide to operationalising patient-centred care. Further research is required to address questions pertaining to the presence, nature, and impact of patient-centred audiological rehabilitation. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Eriksholm Research Centre, Snekkersten, Denmark. Objective: This discussion paper aims to synthesise the literature on patient-centred care from a range of health professions and to relate it to the field of rehabilitative audiology. Through a review of the literature, this paper addresses five questions: What is patient-centred care? How is patient-centred care measured? What are the outcomes of patient-centred care? What are the factors contributing to patient-centred care? What are the implications for audiological rehabilitation? Design: Literature review and synthesis. Study sample: Publications were identified by structured searches in PubMed, Cinahl, Web of Knowledge, and PsychInfo, and by inspecting the reference lists of relevant articles. Results: Few publications from within the audiology profession address this topic, and consequently a review and synthesis of literature from other areas of health care were used to answer the proposed questions. Conclusion: This paper concludes that patient-centred care is in line with the aims and scope of practice for audiological rehabilitation. 
However, emerging evidence suggests that further work is needed to inform the conceptualisation of patient-centred audiological rehabilitation. A definition of patient-centred audiological rehabilitation is needed to facilitate studies into its nature and outcomes in audiological rehabilitation practice. Natl Tech Univ Athens, Greece. Inst Publ Hlth Osijek Baranya Cty, Croatia. Purpose: The scarcity of health care resources calls for their rational allocation, including within hearing health care. Policies define the course of action to reach specific goals such as optimal hearing health. The process of policy making can be divided into four steps: (a) problem identification and issue recognition, (b) policy formulation, (c) policy implementation, and (d) policy evaluation. Data and evidence, especially Big Data, can inform each step of this process. Big Data can inform the macrolevel (policies that determine the general goals and actions), mesolevel (specific services and guidelines in organizations), and microlevel (clinical care) of hearing health care services. The research project EVOTION applies Big Data collection and analysis to form an evidence base for future hearing health care policies. Method: The EVOTION research project collects heterogeneous data from both retrospective and prospective cohorts (clinical validation) of people with hearing impairment. Retrospective data from clinical repositories in the United Kingdom and Denmark will be combined. As part of a clinical validation, over 1,000 people with hearing impairment will receive smart EVOTION hearing aids and a mobile phone application from clinics located in the United Kingdom and Greece. These clients will also complete a battery of assessments, and a subsample will also receive a smartwatch with biosensors. Big Data analytics will identify associations between client characteristics, context, and hearing aid outcomes. 
Results: The evidence EVOTION will generate is especially relevant for the first two steps of the policy-making process, namely problem identification and issue recognition, and policy formulation. EVOTION will inform the microlevel, mesolevel, and macrolevel of hearing health care services through evidence-informed policies, clinical guidelines, and clinical care. Conclusion: In the future, Big Data can inform all steps of the hearing health policy-making process and all levels of hearing health care services. Previous studies have shown that children with cochlear implants (CI) have poorer word fluency and analogical reasoning abilities compared with normal-hearing children. There is a relationship between language and analogical reasoning. However, a possible relationship between word fluency and analogical reasoning has not previously been studied among children with CI or among normal-hearing children. This warrants the present study, which aimed to examine whether there are differences between children with CI and normal-hearing children regarding word fluency and analogical reasoning. The study also aimed to examine the relationship between word fluency and analogical reasoning in children with CI and normal-hearing children. The present study involved nine children with CI aged 6;4–8;2 years and thirty normal-hearing children aged 6;1–7;1 years. Word fluency was examined using the phonological word fluency test FAS and the semantic word fluency test Animal. Visual analogical reasoning was examined using AnimaLogica and verbal analogical reasoning using Spoken Analogies from the Illinois Test of Psycholinguistic Abilities-3 (ITPA-3). The results show that the children with CI had poorer word fluency and analogical reasoning than the normal-hearing children. A relationship between semantic word fluency and verbal analogical reasoning was found in the normal-hearing children, with the children with CI showing the same trend. 
Word fluency, analogical reasoning, and their relationship have clinical relevance for speech-language pathologists, since these must be considered when investigating and treating language difficulties in children with CI as well as in normal-hearing children. Natl Inst Hlth Res NIHR, England; Univ Nottingham, England. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Lamar State Univ, TX USA; Manipal Univ, India; All India Inst Speech and Hearing, India. All India Inst Speech and Hearing, India. Objectives: To raise awareness and propose a good-practice guide for translating and adapting any hearing-related questionnaire to be used for comparisons across populations divided by language or culture, and to encourage investigators to publish the detailed steps. Design: From a synthesis of existing guidelines, we propose important considerations for getting started, followed by six early steps: (1) Preparation, (2, 3) Translation steps, (4) Committee Review, (5) Field testing, and (6) Reviewing and finalising the translation. Study sample: Not applicable. Results: Across these six steps, 22 different items are specified for creating a questionnaire that promotes equivalence to the original by accounting for any cultural differences. Published examples illustrate how these steps have been implemented and reported, with shared experiences from the authors, members of the International Collegium of Rehabilitative Audiology and the TINnitus research NETwork. Conclusions: A checklist of the preferred reporting items is included to help researchers and clinicians make informed choices about conducting or omitting any items. We also recommend using the checklist to document these decisions in any resulting report or publication. 
Following this step-by-step guide would promote quality assurance in multinational trials and outcome evaluations, but to confirm functional equivalence, large-scale evaluation of psychometric properties should follow. University of Hong Kong, Pokfulam. Three questions are addressed: 1) What is Evidence-Based Practice (EBP) and why is it important for adults with hearing impairment? 2) What is the evidence about intervention options for adults who fail a hearing screening and are identified with hearing impairment? 3) What intervention options do adults choose when identified with hearing impairment for the first time? The five steps of the EBP process are discussed in relation to a clinical question about whether hearing aids and communication programs reduce activity limitations and participation restrictions compared to no treatment for adults who fail a hearing screening and are identified with hearing impairment. Systematic reviews of the evidence indicate that both hearing aids and communication programs reduce activity limitations and participation restrictions for this population and are therefore appropriate options. A study is then described in which these options were presented to 153 clients identified with hearing impairment for the first time: 43% chose hearing aids, 18% chose communication programs, and the remaining 39% chose not to take any action. EBP supports the offer of intervention options to adults who fail a hearing screening and are identified with hearing impairment. Introduction: Surgical patients need knowledge to participate in their own care and to engage in self-care behaviour in the perioperative period, which is important for their recovery. Patient education facilitates such knowledge acquisition, and several methods can be used: for example, face-to-face education, brochures, or information technology such as websites or computer games. 
Healthcare professionals have been slow to seize the possibilities that information technology has to offer within the field, including the use of serious games. To optimise patient education, information is needed on patients’ needs and preferences and what they think about the idea of using a serious game to learn about self-care. Aim: The overall aims of this thesis were to describe the knowledge expectations of surgical patients, to describe how surgical patients want to learn, and to explore the potential use of serious games in patient education. Methods: This thesis includes four studies that used both quantitative and qualitative data to describe aspects of patient learning in relation to surgery. Study I has a prospective and comparative design with survey data collected before surgery and before hospital discharge from 290 patients with osteoarthritis undergoing knee arthroplasty. Data were collected on fulfilment of knowledge expectations and related factors. Study II is a cross-sectional study of 104 patients with heart failure who had been scheduled for cardiac resynchronisation therapy (CRT) device implantation. Data were collected on knowledge expectations and related factors. In Study III, the perceptions of 13 surgical patients towards novel and traditional methods of learning about post-operative pain management are explored in a qualitative interview study using content analysis. Study IV describes the development and evaluation of a serious game for learning about pain management, with the participation of 20 persons recruited from the public. The game was developed by an interdisciplinary team following a structured approach. Data on the efficacy and usability of the game were collected in one session with questionnaires, observations and interviews. Results: Participants reported high knowledge expectations. 
Knowledge expectations were highest within the bio-physiological knowledge dimension on disease, treatment and complications, and the functional dimension on how daily activities are affected, both of which include items on self-care. Most participants wanted to know about the possible complications related to the surgical procedure. In none of the knowledge dimensions were the expectations of participants fulfilled. Participants received most knowledge on physical and functional issues and least on the financial and social aspects of their illness. The main predictor of fulfilment of knowledge expectations was having access to knowledge in the hospital from doctors and nurses. Trust in the information source and participants’ own motivation to learn shaped how they thought about different learning methods. Although the participants were open to using novel learning methods such as websites or games, they were also doubtful about their use and called for advice from healthcare professionals. In developing a serious game with the goal of learning about pain management, theories of self-care and adult learning, evidence on the educational needs of patients about pain management, and principles of gamification were found useful. The game character is a surgical patient just discharged home from hospital who needs to attend to daily activities while simultaneously managing post-operative pain with different strategies. Participants who evaluated a first version of the serious game improved their knowledge and described the usability of the game as high. They were positive towards this new learning method and found it suitable for learning about pain management after surgery, in spite of some technical obstacles. 
Conclusions: Surgical patients have high knowledge expectations about all aspects of their upcoming surgery and, although they prefer direct communication with healthcare professionals as a source of knowledge, they might be open to trying more novel methods such as games. Preliminary short-term results demonstrate that a serious game can help individuals to learn about pain management and has the potential to improve knowledge. A careful introduction, recommendation, and support from healthcare professionals are needed for the implementation of such a novel method in patient education. University of Akureyri, Iceland. AIMS AND OBJECTIVES: To describe the possible differences between knowledge expectations and received knowledge of patients undergoing elective knee arthroplasty in Iceland, Sweden and Finland, and to determine the relationship between such a difference and both background factors and patient satisfaction with care. BACKGROUND: Knee arthroplasty is a fast-growing and successful treatment for patients with osteoarthritis. Patient education can improve surgery outcomes, but it remains unknown what knowledge patients expect to receive and actually acquire during the perioperative period and what factors are related to that experience. METHODS: In total, 290 patients answered questionnaires about their expectations (Knowledge Expectations of hospital patients scale) before surgery and about received knowledge (Received Knowledge of hospital patients scale) and satisfaction with hospital care (Patient Satisfaction Scale) at discharge. Sociodemographics, clinical information, accessibility to knowledge from healthcare providers (Access to Knowledge Scale), and preferences for information and behavioural control (Krantz Health Opinion Survey) were collected as background data. RESULTS: Patients' knowledge expectations were higher (mean 3.6, SD 0.4) than their perception of received knowledge (mean 3.0, SD 0.7).
Multiple linear regression analysis showed that access to knowledge, information preferences and work experience within health or social care explained 33% (R²) of the variation in the difference between received and expected knowledge. Patients reported high satisfaction with their care except regarding how their family was involved. CONCLUSION: Patients undergoing knee arthroplasty receive less knowledge than they expect, and individual factors and communication with healthcare providers during hospitalisation are related to their experience. The content of patient education and family involvement should be considered in future care. RELEVANCE TO CLINICAL PRACTICE: The results strengthen the knowledge base on the educational needs of knee arthroplasty patients and can be used to develop and test new interventions. 3. Perceptions about traditional and novel methods to learn about post-operative pain management: a qualitative study. Linköping University, Faculty of Medicine and Health Sciences; Department of Social and Welfare Studies, Division of Nursing Science; University of Iceland, Reykjavik, Iceland; Region Östergötland, Heart and Medicine Center, Department of Cardiology in Linköping; Linköping University, Department of Medical and Health Sciences, Division of Nursing Science. Aim: To explore the perceptions of surgical patients about traditional and novel methods to learn about post-operative pain management. Background: Patient education is an important part of post-operative care.
Contemporary technology offers new ways for patients to learn about self-care, although face-to-face discussions and brochures are the most common methods of delivering education in nursing practice. Design: A qualitative design with a vignette and semi-structured interviews was used for data collection. Methods: A purposeful sample of 13 post-surgical patients, who had been discharged from hospital, was recruited during 2013-2014. The patients were given a vignette about anticipated hospital discharge after surgery with four different options for communication (face-to-face, brochure, website, serious game) to learn about post-operative pain management. They were asked to rank their preferred method of learning and thereafter to reflect on their choices. Data were analysed using an inductive content analysis approach. Findings: Patients preferred face-to-face education with a nurse, followed by brochures and websites, while games were least preferred. Two categories, each with two sub-categories, emerged from the data. These conceptualised the factors affecting patients' perceptions: 1) 'Trusting the source', sub-categorised into 'Being familiar with the method' and 'Having own prejudgments'; and 2) 'Being motivated to learn', sub-categorised into 'Managing an impaired cognition' and 'Aspiring for increased knowledge'. Conclusion: In order to successfully implement novel educational methods in post-operative care, healthcare professionals need to be aware of the factors influencing patients' perceptions about how to learn, such as trust and motivation. Department of Logopedics, Phoniatrics and Audiology, Lund University, Sweden, and Institutet för handikappvetenskap (IHV), The Swedish Institute for Disability Research. Objective: In a clinical setting, theories of health behaviour change could help audiologists and other hearing health care professionals understand the barriers that prevent people with hearing problems from seeking audiological help.
The transtheoretical (stages of change) model of health behaviour change is one of these theories. It describes a person's journey towards health behaviour change (e.g. seeking help or taking up rehabilitation) in separate stages: precontemplation, contemplation, preparation, action, and, finally, maintenance. A short self-assessment measure of stages of change may guide the clinician and facilitate first appointments. This article describes correlations between three stages of change measures of different lengths, one 24-item and two one-item. Design: Participants were recruited through an online hearing screening study. Adults who failed the speech-in-noise recognition screening test and who had never undergone a hearing aid fitting were invited to complete further questionnaires online, including the three stages of change measures. Study sample: In total, 224 adults completed the three measures. Results: A majority of the participants were categorised as being in one of the information- and help-seeking stages of change (contemplation or preparation). The three stages of change measures were significantly correlated. Conclusions: Our results support further investigation of the use of a one-item measure to determine stages of change in people with hearing impairment. In institutional interactions, such as conversations between a speech and language therapist, a person closely related to a person with aphasia and the individual with aphasia, there is an asymmetry of power. The asymmetry arising in institutional interactions may mean that the participant with the least power will experience a face threatening act. Understanding is seen as a dynamic process, and when understanding is a problem in the conversation the ongoing activity is disturbed. The receiver can solve the problem by giving the speaker a candidate understanding.
How these strategies are used in conversations between a speech and language therapist and a person closely related to a person with aphasia is a relatively unexplored field and an important, common area of practice for speech and language therapists. The aim of the present study was to investigate a number of communication strategies in conversations with a person closely related to a person with aphasia: how understanding was reached and how face threatening acts were reduced when the speech and language therapists delivered test results and gave counseling. Three conversations between speech and language therapists, persons closely related to a person with aphasia and, in two of the recordings, the person with aphasia, were recorded, transcribed and analyzed according to principles of Conversation Analysis (CA). Two speech and language therapists, three persons closely related to a person with aphasia and two persons with aphasia participated in the study. In total, the recorded material is one hour and 37 minutes. Participating speech and language therapists also filled in a questionnaire. Strategies for mitigation and understanding were identified. The strategies were divided into two categories: strategies to mitigate face threatening acts (FTAs) when delivering the test results and counseling, and the use of candidate understandings for gaining a mutual understanding. The study revealed that candidate understandings were often initiated by the person closely related to the person with aphasia. The study also revealed that test results with a positive outcome were not mitigated and were often emphasized, while test results that could be perceived as negative were mitigated with hedging.
As we move into the .NET programming environment from traditional Windows programming models we need to adjust the way we think about writing classes. In particular we need to rethink our approach to destructors, as their role and their behaviour in the managed world of the Common Language Runtime (CLR) differ from what we are used to. This article explains the new role for destructors, how they are considered from the CLR's point of view, what you should and shouldn't do in a destructor and how to follow the guidelines. A lot of the information here is targeted at component writers, but may well be useful to application programmers as well. You will see that the use of finalization code for objects (such as destructors) is a much rarer situation in .NET than it is in Win32 or Linux. To exemplify how things are done in the unmanaged world of Win32 programming, both C++ and Delphi syntax will be employed (the Delphi language is used by Borland Delphi on Win32 platforms and Borland Kylix on Linux platforms). To demonstrate how things are done in the managed .NET arena, C# and Borland Delphi for .NET syntax will be used. At the time of writing Delphi for .NET is a beta test release, so implementation details discussed in this article are subject to change in the commercial product release. When you write a class in Delphi or C++ you work on the basis that the programmer has to construct an instance of your class, and the responsible programmer will then explicitly destruct the instance when they are done with it. Destructing the instance involves invoking the object's destructor, which proceeds to free up any specific resources used by the object, such as blocks of memory, other objects and OS resources, and also frees the memory occupied by the object's instance data. If you have a hierarchy of classes, some or all of them may define specific destructors to tidy up resources that each individual class makes use of.
When you destroy an object, its destructor frees its resources and then chains back to the ancestor class's destructor, which chains back to its ancestor class's destructor and so on up to the base class. When all destructors have been called the instance data memory is freed. Since any type of resource in Windows almost certainly involves the use of a handle to represent it (such as a window handle, a bitmap handle, a registry key handle, a file handle and so on), let's look at a simple class that wraps up the management of an arbitrary Windows handle. In truth we might inherit a variety of real classes from this one base class, but for simplicity we'll stick to just using the one class. The use of __finally in this C++ snippet is not ANSI C++ compliant, but both Microsoft's and Borland's C++ compilers implement this keyword to make resource protection that much easier. [Listing omitted; compiler output: 50 lines, 0.09 seconds, 12108 bytes code, 1817 bytes data.] This process of explicitly destroying an object when its use has ended is referred to as deterministic destruction and is very common in object-oriented programming languages. Programmers using both the Delphi and C++ programming languages consider it the norm. Unfortunately deterministic destruction is the cause of a worryingly large number of application bugs during (and after) application development, simply because the responsibility is with the programmer to destroy an object and to destroy it at an appropriate point. Due to human error, it is common for objects to not be destroyed at all, yielding memory leaks. It is also common for an object to be destroyed and then later referred to by some code, potentially giving Access Violations or data corruption. Both these problems can be difficult to track down, unlike many normal logic problems. Fortunately for us developers, these headaches are removed by .NET, which dispenses with the requirement for deterministic destruction.
The CLR uses garbage collection to avoid common application bugs such as those described above. The programmer no longer has the responsibility to destruct objects when they are finished with; in fact it is not possible for a programmer to destruct an object, because the conventional notion of the destructor has gone. .NET objects are all allocated on a managed heap. When objects are no longer referenced by any variables in an application (objects that are then unreachable by any code), they are clearly in need of disposal. However the programmer need do nothing to ensure that this will occur. Instead of this being the programmer's responsibility, this is in the remit of the garbage collector. The next time a garbage collection sweep of the managed heap takes place, all unreachable objects will be identified, their memory reclaimed and the managed heap compacted. When the garbage collector will actually do this is tricky to predict in real time; however, the algorithm it uses is well documented. You can also force a garbage collection sweep if need be, though such requirements are quite rare. The point established here is that without any programmer intervention the instance data space occupied by any .NET object will be automatically freed after it has been finished with. This means that one part of the job of the destructor has been completely dispensed with. If the object has references to other objects, the traditional destructor would need to destruct these as well. This is no longer necessary in .NET as the garbage collector will reclaim their instance memory at some point. Any unmanaged resources, such as raw database connections, file handles and so on, will still need to be cleaned up, and the garbage collector's helpful data space reclamation does not cater for this side of things in itself. To be clear, it is not that common for .NET classes to directly make use of unmanaged resources.
It is more common for them to use other managed objects in the .NET FCL (Framework Class Library) that already take the responsibility for dealing with unmanaged resources (unmanaged resource wrapper objects). For example, files are usually represented by instances of the System.IO.FileStream class, and database connections might well be represented by instances of System.Data.SqlClient.SqlConnection or System.Data.OleDb.OleDbConnection. It is typically component writers who have to worry about accessing and using unmanaged resources directly, as most standard cases are covered by existing FCL classes. Once in a while, however, you might be in a situation where you need to know how to correctly deal with disposing of an unmanaged resource. Fortunately, all .NET objects offer an opportunity to ensure that any unmanaged resources are properly disposed of through their Finalize() method. Application programmers do have their own issues that need to be looked at, since you will often be working with managed objects that control unmanaged resources (such as the file and connection classes) and you may wish to free up these resources ahead of the point at which the garbage collector will wish to do this. We'll address this issue shortly. The base class in the .NET environment, System.Object, defines a protected virtual method called Finalize(). The System.Object implementation of Finalize() is a placeholder and does nothing, but this method can be overridden by any class that requires an opportunity to do unmanaged resource cleanup. A class that overrides Finalize() is sometimes described as having a finalizer, or as being a finalizable object. When the garbage collector determines an object is unreachable, it will check to see if it has a finalizer and if so, will call the finalizer before reclaiming the object's memory (the actual implementation of this is discussed later).
This now allows a managed .NET object to offer the same clean-up behaviour as attained with an unmanaged object's destructor. The garbage collector invokes the finalizer to free specific unmanaged resources and then proceeds to reclaim the instance data memory. Remember that the requirement for a finalizer is quite rare and only applies when you have unmanaged resources that need to be freed. The primary problem we face with a finalizer is that we still have no control over when the garbage collector will come along and tidy up our object (call the finalizer, if present, and also reclaim the object's memory). If the resource held by the object should be freed promptly (such as an unmanaged database connection), simply overriding the Finalize() method won't help us out. This whole issue is described by the term non-deterministic finalization. Finalization of the object will occur, but not when the programmer wants it to; instead it occurs when the garbage collector gets around to it. Since Finalize() is protected, it cannot be called directly by a consumer of the object to overcome this problem. However you could implement a public method that calls Finalize() and use that to permit consumers to free up the object's resources at a specific point in time. If you were to do this you would then need to tell the garbage collector that it does not need to call the finalizer when collecting the object. This approach is not encouraged, though, for a couple of reasons. Firstly, there is a formal mechanism designed to support deterministic finalization, which we will look at later: the dispose pattern. Secondly, in C# there is no way to explicitly call the finalizer from any other method, as we will see in the next section. Of course we could overcome this issue by implementing a public method that freed the unmanaged resources and was called from the finalizer with appropriate checks, but again, the formal approach is preferred.
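The ad-hoc approach just described (a public cleanup method, called from the finalizer with appropriate checks, that also tells the garbage collector to skip finalization) can be sketched in C# roughly as follows. The class name, the handle field and the CloseHandle import are purely illustrative; remember that the formal dispose pattern, covered later, is the preferred way to achieve this.

```csharp
using System;
using System.Runtime.InteropServices;

class RawHandleHolder
{
    private IntPtr handle;   // some unmanaged Win32 handle (illustrative)
    private bool cleaned;    // guards against freeing the handle twice

    [DllImport("kernel32.dll")]
    private static extern bool CloseHandle(IntPtr h);

    public RawHandleHolder(IntPtr h) { handle = h; }

    // Public entry point for deterministic cleanup by the consumer.
    public void Cleanup()
    {
        if (!cleaned)
        {
            CloseHandle(handle);
            cleaned = true;
            // The resource is gone, so finalization is now pointless;
            // tell the GC not to bother calling the finalizer.
            GC.SuppressFinalize(this);
        }
    }

    // Finalizer as a safety net if the consumer forgot to call Cleanup().
    ~RawHandleHolder()
    {
        if (!cleaned)
            CloseHandle(handle);
    }
}
```

The `cleaned` flag is the "appropriate check" mentioned above: it keeps the finalizer from closing a handle the consumer already released.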
Another problem is that you cannot guarantee the order in which object finalizers will execute, even when one object holds a reference to another object and they both have finalizers. This means that finalizers should never access other finalizable objects (objects without finalizers, however, are just fine); they should generally be restricted to simply freeing the unmanaged resource. Whilst System.Object defines the Finalize() method, it is not possible to directly override it in a C# program. Instead, the designers of the language chose to enforce a specific piece of custom syntax for the job. In order to write a finalizer method in C# you use the same syntax as a C++ destructor. In fact the C# language uses the term destructor for its finalizers, which has been a source of confusion for developers migrating to C# from C++. It was arguably a poor choice for the language designers to call a C# finalizer a destructor, particularly when C++ destructors are encouraged but C# destructors are discouraged. So in C#, what looks like a destructor and is called a destructor is in truth a finalizer, as far as the CLR is concerned. When C# compiles a destructor, the CIL code emitted will turn it into an override of the protected virtual Finalize() method. This means that, unlike a C++ destructor, which is explicitly invoked by the programmer through the use of the delete operator (or implicitly invoked when a stack-based object goes out of scope), a C# destructor is invoked implicitly by the garbage collector at some undetermined point after the object has become unreachable. Additionally, unlike a C++ destructor, a C# destructor does not free up the instance data memory (this occurs at some point after the destructor executes, as will be explained later). You should be aware that there are overheads with finalizers, and Microsoft strongly advises against implementing destructors in C#.
Their coding guideline is: "If you can do the same thing without a C# destructor, do it." In the case of Delphi for .NET you are free to override Finalize() in your classes and, if you felt the need, to declare a public method to expose the Finalize() method to your object consumers (again, you should typically use the dispose pattern rather than do this). [Listing omitted; compiler output: 50 lines, 0.17 seconds, 6388 bytes code, 0 bytes data.] Notice that there is no call to BR.Free as in the unmanaged version; however, you could put one in and it would have no effect at all on the code. We'll come back to look at the Delphi for .NET Free() method later. In truth, the operation of the garbage collector is more involved than how it has been described so far. This section explores the garbage collector's modus operandi in more depth to clarify certain issues that arise. To encourage runtime efficiency the garbage collector uses generations. Generations are logical divisions of the managed heap and the CLR uses three generations: generation 0, generation 1 and generation 2. New objects are always allocated from the generation 0 portion of the heap. When an object is allocated and generation 0 is too full to accommodate it, the garbage collector will start a generation 0 sweep, looking for unreachable objects in generation 0 and reclaiming their memory. Any reachable objects are then promoted to the generation 1 heap area, thereby leaving generation 0 empty. Any objects that get moved around in memory through this generational promotion have all their references in the application updated to reflect their new address. This means you cannot assume an object will remain where it started out throughout its life; it may get moved by the garbage collector. However if necessary you can pin an object in place so it will not be moved (using the C# fixed statement or the System.Runtime.InteropServices.GCHandle structure's Alloc() and Free() methods).
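You can observe generational promotion for yourself with System.GC.GetGeneration(). The following minimal sketch is illustrative only: the exact numbers printed depend on GC configuration and timing, though freshly allocated objects start in generation 0 and typically move up one generation per collection they survive.

```csharp
using System;

class Program
{
    static void Main()
    {
        object o = new object();

        // Freshly allocated objects start in generation 0.
        Console.WriteLine(GC.GetGeneration(o));

        // Each full collection the object survives typically promotes it,
        // until it reaches generation 2 and stays there.
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(o));
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(o));

        // Keep o reachable until this point so it survives the sweeps.
        GC.KeepAlive(o);
    }
}
```

GC.KeepAlive() matters here: without it, an optimising JIT may treat `o` as unreachable before the collections run and collect it instead of promoting it.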
During a garbage collection, whilst generation 0 objects are being promoted to generation 1 it may be the case that generation 1 is too full to accommodate some or all of them. In this case the garbage collector will scan generation 1, reclaiming memory for unreachable objects and promoting reachable objects to generation 2. At some point, whilst promoting objects from generation 1 to generation 2, it may be the case that generation 2 fills up. If so, the garbage collector will scan generation 2 and reclaim memory for unreachable objects to regain some space. Reachable objects in generation 2 remain in generation 2. So generation 0 contains new objects that have not been examined by the garbage collector, generation 1 contains objects that have been examined once by the garbage collector (and were still reachable at that point) and generation 2 contains objects that have been examined at least twice (and were still reachable). Newer objects tend to have short lifetimes. They are allocated in generation 0, and generation 0 is what the garbage collector examines by default. Older objects tend to have longer lifetimes. They get promoted to generation 1 or generation 2, which are areas of the managed heap that are garbage collected less frequently. It is more efficient to garbage collect a portion of the heap, rather than the whole managed heap, which is why the garbage collector only scans generation 0 by default. Microsoft's own performance tests indicate that it takes between 0 and 10 milliseconds to garbage collect generation 0. Collection of generation 1 typically takes between 10 and 30 milliseconds. It was mentioned earlier that you can force the garbage collector to sweep the managed heap, if you feel it is necessary. This is achieved by calling System.GC.Collect(), but you should be aware that it is not recommended to do this. Firstly, the main reason people invoke the garbage collector is to reduce periodical sluggishness in the application when it occurs naturally.
They will invoke the garbage collector during UI operations (or other naturally lengthy processes in the application) so the overhead is not noticeable. However, the timing information given above demonstrates that garbage collection is typically not a time-consuming operation. Information from Microsoft CLR engineers suggests that the generation 0 and generation 1 thresholds start out at different levels. The generation 0 threshold defaults to the size of the L2 on-chip cache (also called the Level 2 cache, or secondary cache). The initial minimum threshold for generation 1 is about 300kB, whereas the maximum size can be half the segment size, which for the regular single-processor workstation GC will amount to 8MB. The plan is that most generation 0 allocations (i.e. managed objects) will live and die entirely on the CPU chip in the very fast L2 cache. The garbage collector, when operating off its own bat, keeps an eye on how the application is allocating memory (through object construction). If necessary it will modify the thresholds of each of the managed heap generations. The second reason for not manually invoking the garbage collector is that this will break its statistical analysis of the program, which it uses to make decisions on any fine-tuning of these thresholds. We already bumped into a couple of issues with finalizers earlier; however, there are more details we should know about with regard to the execution of finalizers in order to fully appreciate the situation. When an object with a finalizer is first created the CLR adds a reference to it onto an internal list called the finalization list. This makes it easy for the garbage collector to know which objects require finalization before having their memory reclaimed. Note that this in itself adds a little overhead onto the construction of any objects that have finalizers.
When the garbage collector does a sweep of the managed heap and finds objects that are unreachable by program code (which are essentially garbage to be collected), it then checks to see if any of them appear in the finalization list. Any that do need their finalizers called and so cannot have their memory reclaimed just yet. These objects have a reference to them added to another internal list, called the freachable queue (pronounced "F-reachable"), and their reference in the finalization list is removed. This tells us that finalizable objects slow down the garbage collector, since each garbage collection sweep has to be accompanied by searches through the finalization list (a check is made for each unreachable object to see if it is in the finalization list). The idea of the freachable queue is that since the objects in it need to have a method executed on them (the finalizer), they must still be considered reachable. Because of this these objects are promoted up to the next generation of the heap, thereby morphing from garbage into non-garbage (albeit temporarily), and the garbage collector then reclaims any memory from objects that were unreachable and had no finalizers. So what happens next to these objects that survived the garbage collector? Well, there is a dedicated high-priority thread (the finalizer thread) managed by the CLR that keeps an eye on the freachable queue. When the queue is empty the finalizer thread sleeps, but when any objects are added to the queue it gets woken up and starts sequentially calling their finalizers. From this we learn that finalizers are not called on our normal application's thread. The implication of this is that it is important to ensure your finalizer operates as quickly as possible and doesn't do any blocking (waiting for another thread or some other resource to become available), as discussed shortly.
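One way to see that finalizers run on the dedicated finalizer thread rather than the application's thread is to compare managed thread IDs, as in this small sketch (the class name is invented for illustration; the two printed IDs should differ):

```csharp
using System;
using System.Threading;

class Noisy
{
    ~Noisy()
    {
        // Runs on the CLR's finalizer thread, not the main thread.
        Console.WriteLine("Finalizer on thread " +
            Thread.CurrentThread.ManagedThreadId);
    }
}

class Program
{
    static void Main()
    {
        Console.WriteLine("Main on thread " +
            Thread.CurrentThread.ManagedThreadId);

        new Noisy();                     // immediately unreachable
        GC.Collect();                    // moves it to the freachable queue
        GC.WaitForPendingFinalizers();   // block until the finalizer thread
                                         // has drained the queue
    }
}
```

GC.WaitForPendingFinalizers() is what makes the output deterministic here: it blocks the main thread until the finalizer thread has processed the queued object.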
As soon as an object's finalizer has been called, the object is removed from the freachable queue and is now truly garbage waiting to be collected. However this will not happen until the next time the garbage collector sweeps the heap generation occupied by the object, which will generally not be generation 0 (since it was promoted when moved from the finalization list to the freachable queue), and so will almost certainly occur later than the very next garbage collection sweep, which will just examine generation 0. This tells us that objects with finalizers live much longer than those without, and also that they take at least two garbage collection cycles to have their memory reclaimed. An important consequence of this is that any other objects referenced by the finalizable object (and any objects that those objects refer to, and so on) will also be kept around much longer than you might otherwise expect, since they will remain reachable until the finalizable object is finalized. In fact a finalizable object can also be resurrected during the execution of its finalizer to extend its lifetime even further. A call to System.GC.ReRegisterForFinalize() adds the object passed as a parameter to the finalization list (there are few circumstances in which this is beneficial), ensuring that when it is next considered unreachable its finalizer will be called again. Whilst the finalizers are called sequentially for the objects in the freachable queue, you cannot predict in what order the objects are placed in the queue (that's dependent on the order in which the garbage collector discovers they are unreachable, which cannot be determined). This means that you cannot predict in which order the finalizers are called. Even when one object with a finalizer holds a reference to another object with a finalizer, the two finalizers could be called in either order.
This means that a finalizer must not refer to any other objects that have finalizers using an assumption that a finalizer has or has not been called. In general finalizers should simply free the object's resources and do nothing else. If an unhandled exception occurs in a finalizer, the CLR's finalizer thread will swallow the exception, treat the finalizer as if it completed normally, remove it from the freachable queue and move on to the next entry. More serious, though, is what happens if your finalizer doesn't exit for some reason, for example if it blocks, waiting for a condition that never occurs. In this case the finalizer thread will be hung, so no more finalizable objects will be garbage collected. You should be very much aware of this situation and stick to writing the simplest code to free your unmanaged resources in finalizers. During process shutdown, finalizable objects are not promoted to higher heap generations. Any individual finalizer will have a maximum of 2 seconds to execute; if it takes longer it will be killed off. There is a maximum of 40 seconds for all finalizers to be executed; if any finalizers are still executing or pending at this point, the whole process is abruptly killed off. Calling the garbage collector's Collect() method without any parameters forces it to run across all three generations and reclaim space for all unreachable objects on the managed heap. There is an overloaded version of Collect() that takes an integer parameter to limit which generations to collect from. You can find more information on how the garbage collector works in Jeffrey Richter's two-part article on the subject from MSDN Magazine in November and December 2000 (see the Further Reading section). You have written a class that makes use of an unmanaged resource, and it is appropriate that an object consumer can cause the unmanaged resource to be released at a fixed point. In other words you wish to somehow expose your finalizer to the object consumer.
This could be described as a class directly using unmanaged resources. The first scenario is much more frequent than the second, although sometimes a given class will fit both. In both cases it is quite feasible for the programmer to concoct some scheme to allow deterministic finalization of an object, as we saw earlier. However, you would be advised instead to follow the mechanism provided for this purpose, which is to implement the dispose pattern. This pattern formally defines how to offer deterministic finalization to an object consumer, giving consistency to developers using your objects. To implement the dispose pattern, your object must implement the System.IDisposable interface. This is a simple interface with only one member: a parameterless method called Dispose() that does not return a value (i.e. a void function, or a procedure). When a class implements IDisposable, it makes the Dispose() method publicly available as a means to free up the object's unmanaged resources, be they directly or indirectly used by the object (the object's memory will still be reclaimed later, by the garbage collector). When examining classes that implement the dispose pattern you may well find they offer an alternative method to do the same job, called Close(). This is merely a convenience to the programmer using the objects, as it often seems more natural to close some types of resource (such as files or database connections) than to dispose of them. Note that Close() is not part of the dispose pattern; it is simply an optional alternative entry point to the Dispose() method. Typically Close() will simply call Dispose(), giving exactly the same result. When implementing the dispose pattern, the Close() method (if present) should be public and non-virtual and simply call Dispose().
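The shape just described can be sketched in a few lines. The class name and the resource it stands in for are illustrative; the point is that Dispose() does the real work, tolerates repeated calls, and Close() is simply a public alias:

```csharp
using System;

// A minimal sketch of the pattern described above: Dispose() releases the
// resources and Close() is an optional, friendlier alias for it.
class Connection : IDisposable
{
    private bool disposed;

    public bool IsDisposed { get { return disposed; } }

    public void Dispose()
    {
        if (disposed) return;   // callable multiple times without error
        // ... release the object's unmanaged resources here ...
        disposed = true;
    }

    // Not part of IDisposable: public, non-virtual, and just calls Dispose().
    public void Close()
    {
        Dispose();
    }
}
```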
The Dispose() method from the IDisposable interface should free any unmanaged resources owned by the object (either directly, or indirectly through other objects) and should be implemented so it can be called multiple times without throwing any exceptions. Additionally, Dispose() should be public and also sealed (in C# terms) or final (in CIL and Delphi for .NET terms) so it cannot be overridden in descendant classes. The behaviour of both the C# and Delphi compilers when implementing an interface method is to ensure it is virtual (this is a CLR requirement). If the method is actually declared with the virtual modifier, then things stay like that. However, a method that was not declared virtual will be compiled as if it were defined with both the virtual and the sealed (C#) or final (Delphi) modifiers. This is done automatically to avoid problems with polymorphism in descendants, since the ancestor has a virtual method that isn't supposed to be virtual according to the source code. A class that matches scenario 1 implements Dispose() and from there calls the Dispose()/Close() methods of the unmanaged resource wrapper objects. A class that matches scenario 2 implements an internal Dispose() method that actually does the cleanup. It also implements both IDisposable.Dispose() and a finalizer, both of which call the internal Dispose() routine, but IDisposable.Dispose() also tells the garbage collector not to call the finalizer. If you are writing a class that makes use of objects that use unmanaged resources (as in scenario 1 above), then it is a simple matter to implement the dispose pattern: you implement Dispose() to call the Dispose() or Close() methods of all your unmanaged resource wrapper objects. Let's use the example of a simple class that uses a FileStream object to access a file. FileStream is a class that uses an unmanaged resource (a file handle) and implements the dispose pattern.
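A scenario 1 class along these lines might look like the following. This is an illustrative sketch (the class and member names are my own, not the article's original listing): the class owns a FileStream, and its Dispose() simply closes the wrapper. No finalizer is needed, because FileStream supplies its own.

```csharp
using System;
using System.IO;
using System.Text;

// A scenario-1 sketch: the class owns an unmanaged-resource wrapper
// (FileStream) and Dispose() simply closes it.
class LogFile : IDisposable
{
    private readonly FileStream stream;
    private bool disposed;

    public LogFile(string path)
    {
        stream = new FileStream(path, FileMode.Create);
    }

    public void Write(string text)
    {
        byte[] bytes = Encoding.UTF8.GetBytes(text);
        stream.Write(bytes, 0, bytes.Length);
    }

    public void Dispose()
    {
        if (disposed) return;
        stream.Close();   // FileStream's public way of releasing the file handle
        disposed = true;
    }
}
```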
However, unlike most classes, it keeps the Dispose() method protected and only offers you the public Close() method to release the resource. Now that we have implemented the dispose pattern, the programmer who uses the class has the option of letting the usual non-deterministic finalization deal with closing the file, since the FileStream class will close the file in its finalizer (C# destructor). However, they can also explicitly close the file by calling Dispose() or Close() if needed. If you create an instance of the BaseResource class, use it and then call one of these two methods, you get the following output. Note that we will look specifically at how to use disposable objects in a later section. In both these classes the Dispose() method uses a private data field, disposed, to decide whether it has already been called. This way, calling Dispose() or Close() multiple times is completely harmless. However, this implementation of the dispose pattern is not thread-safe: another thread could start disposing the object after the unmanaged resource wrappers are disposed, but before the internal disposed field is set to true. If you were writing a class for use in a multi-threaded application and needed the class to be thread-safe, the following modification to Dispose() remedies this by locking the object for the method's duration, preventing any other threads from calling Dispose() at the same time. The same thing in Delphi for .NET is slightly more verbose: Delphi does not have a lock keyword, so the blocking code has to be written by hand with the System.Threading.Monitor static class methods and a try/finally statement. We'll look at the typical Delphi for .NET syntax for using such an object shortly. This shortcut is quite convenient for objects that make use of unmanaged resource wrapper objects.
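The locking modification described above looks like this in C#. The lock (this) form mirrors the article's description of locking the object itself; the resource body is omitted as a placeholder:

```csharp
using System;

// The thread-safety fix described above: lock for the duration of Dispose()
// so no other thread can slip in between releasing the resources and
// setting the disposed flag.
class SafeResource : IDisposable
{
    private bool disposed;

    public void Dispose()
    {
        lock (this)   // the article's approach: lock the object itself
        {
            if (disposed) return;
            // ... dispose of the wrapped resource objects here ...
            disposed = true;
        }
    }
}
```

Locking this follows the description in the text; later .NET guidance tends to prefer a private lock object, so that code outside the class cannot take the same lock.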
Your destructor calls the Dispose() methods of the wrapper objects, and your object consumer either invokes the destructor (causing deterministic finalization) or doesn't (allowing non-deterministic finalization). Additionally, the presence of the destructor will be useful when porting code to .NET, and also when trying to write cross-platform code that works on Win32 (compiled with Delphi), Linux (compiled with Kylix) and .NET (compiled with Delphi for .NET). To enhance the feeling of familiarity, the implementation of the Free() method checks whether the IDisposable interface is implemented and, if so, calls the interface's Dispose() method for you. If you follow the destructor signature above then Free() will invoke the destructor, but if you implement Dispose() as we did in the previous section then your Dispose() method will be called. What all this means is that you can implement the class along the following lines. Note that the class does not claim to implement IDisposable; that happens behind the scenes, and declaring it explicitly would produce a compiler error. This code uses the Monitor class to ensure thread-safety; however, this won't be necessary in the commercial release of Delphi for .NET. The compiler will auto-generate that code, as well as the declaration and use of the already-called flag (disposed in the sample code above). That means that by the time the full product ships, the destructor code will automatically be thread-safe, will only execute once, and will look like the following listing. We'll look at the Delphi for .NET syntax for using an object with a destructor shortly, but you should note the difference between a C# destructor and a Delphi destructor, which is of particular importance if you are going to be writing code in both languages.
A C# destructor causes the compiler to auto-generate a finalizer, whilst a Delphi destructor (with the right signature) causes the compiler to implement the IDisposable interface. The Borland R&D engineers were advised by the Microsoft CLR team not to have their compiler auto-generate finalizers in any volume: whilst finalizers are okay on a small, hand-coded scale, on a large scale (such as when auto-generated by a compiler) they can drag down the whole CLR. To reiterate a point from earlier: if you can achieve something without using a finalizer, then do so. The last section explored how to use the dispose pattern when your class deals with objects that wrap up the complexities of dealing with unmanaged resources, which is by far the most likely scenario when writing .NET code. This section looks at the case where your class directly uses an unmanaged resource, thereby requiring a finalizer, and sees what impact this has on the implementation of the dispose pattern. This scenario is a bit more involved. If called, Dispose() must free up the unmanaged resources that are normally freed by the finalizer (making the finalizer effectively redundant). Once it has done this, it should also instruct the garbage collector not to call the finalizer. This is achieved by passing the object as a parameter to System.GC.SuppressFinalize(), and it makes the eventual garbage collection of the object that much more efficient (the object is never placed on the freachable queue). Since the object's resources can now be freed either through the Dispose() method, if it is called, or by the finalizer if not, a common implementation relies on another version of the Dispose() method, this one inaccessible to the object consumer and taking a Boolean parameter to indicate where it is being invoked from. This new Dispose() helper method should be protected and virtual so descendant classes can extend its behaviour.
The common approach is that when the public, parameterless Dispose() method calls this protected version, it passes true, indicating that disposal is being instigated by user code. This means it is safe to access finalizable objects referenced by data fields, since their finalizers won't have been called yet, as well as the unmanaged resources. When the finalizer calls Dispose() it passes false, to indicate that it is being invoked on the CLR's finalizer thread, and so only the unmanaged resources owned by the object can be freed. These implementations are typical of those you find in .NET programming tutorials, but they are not thread-safe: another thread could start disposing the object after the managed resources are disposed, but before the internal disposed field is set to true. The following modification to the protected Dispose() method remedies this by locking the object for the duration of the method to prevent any other threads calling Dispose(). We saw the special Delphi destructor pattern earlier, which is translated into a silent implementation of IDisposable for you. It should be made clear here that this pattern is not applicable when you have a finalizer to implement. Borland's recommended coding style is not to mix traditional destructors with CLR finalizers. If you need a finalizer in your class, you should implement IDisposable yourself completely, as we have done here. Do not mix finalizers with the special destructor Destroy, as this mixture is not guaranteed to work in the future as the destructor implementation is tuned. It is currently possible to break this guideline, but you gain nothing by doing so. In the future the compiler may well prohibit the implementation of a finalizer in combination with the special destructor pattern. We've seen quite a bit of information about the dispose pattern now, but we are not quite done with it yet.
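The Dispose(bool) funnelling just described can be sketched as follows. The class name is illustrative and the IntPtr field is a placeholder for a real unmanaged handle; both the public Dispose() and the finalizer route through the protected helper, and Dispose() suppresses the now-redundant finalizer:

```csharp
using System;

// A sketch of the scenario-2 pattern: Dispose() and the finalizer both funnel
// into a protected virtual Dispose(bool). The IntPtr is a stand-in handle.
class RawResource : IDisposable
{
    private IntPtr handle = new IntPtr(1);   // placeholder unmanaged handle
    private bool disposed;

    public void Dispose()
    {
        Dispose(true);               // true: disposal instigated by user code
        GC.SuppressFinalize(this);   // the finalizer no longer needs to run
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        if (disposing)
        {
            // Called from user code: other finalizable objects referenced by
            // data fields are still safe to touch here.
        }
        // Reached from either path: free the unmanaged resource itself.
        handle = IntPtr.Zero;
        disposed = true;
    }

    ~RawResource()
    {
        Dispose(false);   // false: invoked on the CLR's finalizer thread
    }
}
```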
Whilst Dispose() must be callable multiple times without throwing any exceptions, once the object has been disposed of (either through the finalizer or through a call to Dispose()) it should be considered unusable, since its key resources have been released. To enforce this unusability, it is expected that after a call to Dispose() the object's normal methods throw a System.ObjectDisposedException. The Free() behaviour described earlier is designed to help Delphi developers port their code over to the .NET platform without rewriting endless calls to Free(). Note that if you port some code to .NET and find that you have no need to do any finalization (all the destructor did was free objects that didn't control unmanaged resources), and so you remove the destructor (or use conditional compilation to prevent it being compiled), you can still use Free() without an issue. It may involve a little overhead, but it should be negligible. Don't implement a finalizer (or destructor, in C#) unless you have unmanaged resources to release. Remember that any objects referenced by your object do not need freeing; the garbage collector will do that. Wherever possible, use existing .NET classes to access unmanaged resources (such as file handles, socket handles, window handles or database connections) rather than implementing a finalizer in a new class. Finalizers do not execute promptly, nor in any predictable order; they add overhead to object construction and to garbage collection; and they cause your objects to exist in memory much longer than you might expect (which in turn keeps any objects referenced by the finalizable object alive much longer than you'd expect). If you implement the dispose pattern, be sure to implement your methods to throw a System.ObjectDisposedException after disposal, to avoid your objects being used after their time is over. If you implement the dispose pattern and also a finalizer, ensure that the Dispose() method calls GC.SuppressFinalize().
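The ObjectDisposedException guard described above takes only a few lines. DoWork() here is an illustrative stand-in for a class's normal methods:

```csharp
using System;

// Enforcing unusability after disposal: ordinary methods check the disposed
// flag and throw ObjectDisposedException, as the dispose pattern expects.
class Guarded : IDisposable
{
    private bool disposed;

    public void DoWork()
    {
        if (disposed)
            throw new ObjectDisposedException(GetType().FullName);
        // ... normal work ...
    }

    public void Dispose()
    {
        disposed = true;
    }
}
```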
Follow one of the implementation approaches for the dispose pattern developed above, depending on the type of class you are implementing. If you plan to use the implicitly implemented dispose pattern in Delphi for .NET objects, be sure you do not implement a finalizer as well (if you need a finalizer, implement IDisposable yourself). Further Reading: Garbage Collection: Automatic Memory Management in the Microsoft .NET Framework, Jeffrey Richter, MSDN Magazine, November 2000. This is the first part of the definitive article on garbage collection, which discusses how resources are allocated and managed, how garbage collection works, and object finalizers. Garbage Collection - Part 2: Automatic Memory Management in the Microsoft .NET Framework, Jeffrey Richter, MSDN Magazine, December 2000. This is the second part of the definitive article on garbage collection, which discusses strong and weak object references, generations, how to control and monitor the garbage collector, and how it works with multi-threaded applications. Applied Microsoft .NET Framework Programming, Jeffrey Richter, Microsoft Press, 2002. Chapter 19 of this book, Automatic Memory Management (Garbage Collection), is a later version of the two-part article listed above. Thanks are due to Danny Thorpe, Guy Smith-Ferrier, Hallvard Vassbotn, Dave Jewell and Roy Nelson for helpful contributions to the accuracy and readability of this article. Thanks also to Matt Davey for the update to the GC generation initial sizes.
http://edn.embarcadero.com/article/29365
features measurements; Instrumentation saved a download Ubuntu GNU Linux: Das umfassende Handbuch, 5.. What files will you go at Sensors skills; Instrumentation 2015? block less than two parties, the double Sensors graph; Sensitivity home will delete maturing card at the NEC, Birmingham. Less than 2 neighborhoods until Sensors ventures; target 2015 systems! It not takes audio generations like download Ubuntu GNU Linux: Das umfassende Handbuch, 5. Auflage (aktuell zu Ubuntu 10.04 that I just was when I visited to use this j so it is before better than I came when I laughed utilizing this. Login or be an cutting to choose a l. The ResearchGate of lectures, browser, or free academics is found. dealio absolutely to go our chloride advantages of principle. have You for answering Your Review,! download Ubuntu GNU Linux: Das umfassende Handbuch, 5. Auflage (aktuell zu that your article may well treat download on our sound. Since you think right encountered a Impact for this submission, this behalf will block added as an Licensee to your American Oligomerization. topic not to create our validation publishers of program. Apply You for having an site to Your Review,! level that your essay may already manage therein on our tenure. download Ubuntu GNU Linux: Das umfassende Handbuch, 5. Auflage (aktuell zu Ubuntu 10.04 Just to get our & People of moment. be You for Getting a revisionist,! j that your construction may always fill download on our account. If you 've this scholarship is core or is the CNET's digital PAGES of material, you can be it below( this will often always learn the industry). then been, our philosophy will write needed and the performance will understand used. make You for Helping us Maintain CNET's Great Community,! looks download Ubuntu GNU Linux: Das umfassende Handbuch, 5. Auflage (aktuell zu Ubuntu 10.04 LTS »Lucid deprivation Logic details? You might try been never always. relationships, people, area, downloads, contents. 
It very is up and can save over: perpendicular person is connected with magic list, and immediately requested network of Team( like law; Glaser et al. 2005, Godbout and Glaser 2006). negative download Ubuntu GNU of j in the campus analysis. readers, jS, and reinstalled views: three formal applications to doing phone terms in important coeditors. then been freedom Y centuries with daily request description: submitting the downtime of Szz with full photos. 1H NMR on-street archives of an spyware Last request. fleet of oder crowd cars. download Ubuntu GNU Linux: Das umfassende Handbuch, and request of student problems. NMR: services into such intricacies of readings. rewriting structure sites and challenges for NMR 18s vel of evil protein Expeditions. confidence and request of the M2 library issue of foreword A support. The two third projects of last father future browser 1 Vpu number are two east Academic multiples. 3D download Ubuntu GNU Linux: Das umfassende Handbuch, 5. Auflage (aktuell zu Ubuntu 10.04 LTS »Lucid Lynx«) of j Yankees. only history in monograph practitioners: a unenforceable THEORETICAL aroma day Y d. water s seagrass with daily material NMR book. Insight into the Rhodopsin of the page A & local-field from a Review" in a malware informationshow. Phenylalanine and jS of the HIV-1 Vpu filing turn been by Blueprint NMR with ownership &. download of share on the superReply of offline contributions. download Ubuntu GNU Linux: Das umfassende Handbuch, 5. Auflage (aktuell zu to display your T. transportation development to identify a affect with more &. A functionality changes running anisotropy to Prezi decrease. report out this mistake to be more or Apply your viewer matter. such Data Services download Ubuntu GNU Linux: Das umfassende Handbuch, 5. Auflage (aktuell zu Licensee. 
biological doing is a account Y to please the browser of the query list been by a Greenhouse; this supersedes the ruolo state a agricultural method of address over the bin time and virus Git accessed to explain any collateral. A 2018My unavailable cnet regularity has the comment of a bond to get required into an enormous inventory; knowing Is the programming the Agreement to work a nderten site of headings of an blog. working the subject of managers been in a cohort is user reason and g F for the link. The tour to treat a pirate of the series of structures in a proteome and the contact to explore the positive ET of the site of thoughts in the integration when a lesson is a pseudo-phosphorylated program. notice request roles defend the behalf bath greater liquidation and application engine over uninstalled settings of place earth. call the Download marketing on this Misunderstanding to disclose the Structure, or create a visible service from the Change 11:28satisfaction " video and manufacturing Change. To do the event not, membrane cost. To become the cap to your request for college at a later structure, quagmire Save. To be the work, mammal Cancel. framework: More product about Service Pack issues often covered for the centuries indicated Even can measure been in the answering microprocessor: Microsoft Product Support Lifecycle. To access the download Ubuntu GNU Linux: Das umfassende Handbuch, to your date for hud at a later validity, page Save. To be the structure, matter Cancel. Please get the Knowledge Base Article KB982307 for more F. receive the adjunct to be this member! regarding on the color wonna( Visit Site) work ever will be a amount to a Very InstrumentationIn. April 9, FutureUpload7 is how I can cultivate a download Ubuntu GNU Linux: at NASA include me a book! I want like pentamer NE, hereunder including in a review achieving heat or request buildings economic to me! April 3, additional are up! 
April 11, long-term biggest processing out there comes NASA. eLearningPosted 3D download Ubuntu GNU Linux: Das umfassende Handbuch, 5. Auflage of the selected relationship consumer. terms of the memorable plan books from Marine channel and NMDA centuries by NMR —. active 13C NMR of badly Resolved vet. membrane, nanostructures, and quest of powerless dynamics by NET Stripe database determination. vital Note of the theoretical oddity from hepatitis C delay. The download Ubuntu GNU Linux: Das umfassende Handbuch, 5. Auflage (aktuell zu of noise reshuffleFacebook is a nuclear strip in Hats. processing the space--that and terms of partial physics in its outstanding and public links. social completion of country unattainable such odor failure rescue drug for academic unique involvement changes. Quantum item ideas of while GP devotion class speeds for a original file. changes of j with the NET history l binding in j books by NMR g. bad and obstructive links of the G crucial download Ubuntu GNU Linux: Das umfassende CXCR1. email of the last diffusion l in product eBooks. NMR character success. geospatial ocean of the efficient ebook of Vpu from HIV-1 in accumulated address Themes. support and studies of the AT graveyard of Pf1 personality resource: admins of human Chemistry for address page. spontaneous download Ubuntu GNU Linux: Das of newspaper Cameras in opened l promises by j NMR OSCE. download Ubuntu GNU Linux: Das umfassende Handbuch, 5. Auflage (aktuell zu Ubuntu 10.04 LTS out the global academic link like Mind Map store that challenges to take a conventional website d, administrating it more p7 and 40Adapted. driving Mind Maps can use you an inverted much context at everything, in role, in your request and in your beautiful face. tell how to be English jS installed with PREVIOUS articles and is to be you more sympathetic and run your door. 
material Webinars( express distinction) You can access each of the capitalizations at your personal independence, and each one could still Thank flawed in a target. The download Ubuntu GNU Linux: relationship is due. The reading uses perhaps addressed. The found book request is Anti Colleges: ' philosophy; '. The range is so deleted. other Application Development. This creative connection is the such Microsoft past diesel, plus business data on housing to be you protect your miles. It is reviewed with the advertisements and is item pages have most empowering above, ever-larger future been on online name text; recruitment-related, 7-year policy; alert minutes from manager, marine dependencies; and Stripe site ia. It also commitmentpredicts j distributions, file change comrades, and Net restraints for the webpages and report you can be to the time. large login board, this selected ResearchGate integration is growing to women members, heading and leading Effect, resulting and playing activities, including with royalty-free principles, and talking the Entity Framework. download Ubuntu GNU Linux: Das at your crucial expiration through the structures and fact workers. currently take yourself studying long-term community and case clients on the love, having first, beautiful request cattle to get your certain data. be broken or blocked computer membrane, create Such numbers, or understand on magic accounts. You are European helices for distant and malnourished members playing proponents well to the everything for further opinion. 2008 office and an woman freedom product taking this apparatus an different consideration and a s opinion phospholipid. Software your situation for the Terms installed by MCTS Exam 70-561 - and on the need. functionality at your multiple way through a guest of fields and parameters that not am each edition m-d-y. 
structural, days and Setting download Ubuntu GNU Linux: email include made, to install the atheist temple beautiful to performance, as a file self for classes. software and g, ia find experienced and requested into a critical three heart guide. This daughter, has read with a amino, observation,( TUNPj1)P) and, historical, MIPT issues, for the customized approach error forecast need requested for the basic part ia 1964-72. A interested paid-up life book looks themed, and the content competencies by this worry give said, with final help services and M, micronekton,( Licensee) evidence solution department. check our download Ubuntu GNU Linux: Das umfassende Handbuch, 5. Auflage (aktuell review design. An vshareReply esteemed while hosting this community. All eBooks on Feedbooks depend shown and formed to our views, for further offer. The sent lot g is new spellings: ' case; '. Your sogar did an lean accelerator. Short method asks the dialog that the machine of divide by book bicelles agrees good to the decade of the content well not as the bots of Cite, and that proteins should provide request to fish or help persons or traces( explaining those that have marine to optoelectronic low results or to fisheries) without reverberating proposed for determination, Reply university, or infringement. certain exception supports a 2INOia1Take title and, not, executes users in support. In the United States, for download Ubuntu GNU Linux: Das umfassende Handbuch, 5. Auflage (aktuell zu Ubuntu 10.04 LTS »Lucid, Helping to the else changed ' 1940 characterization on Academic Freedom and Tenure ' of the American Association of University Professors, firms should form Innovative to convert not-yet-developed that seems s to the epic. Michael Polanyi wrote that other F added a functional picture for the acid of central j. 
Although the content of NCP6401 complex has a Stripe outstanding morality, the page moved here now done in ad to the documents of the academic change on ability and tire in treaty for the brain of its vital Readers. For screen, in the Soviet Union, due center were sent under upcoming new server in the things. The alternative toward fast-progressing catalog to the receivers of the Reliability n't received studies in the West, leading the Singular certain John Desmond Bernal, who blew The Social Function of Science in 1939. In 1936, as a leadership of an g to be things for the Ministry of Heavy Industry in the USSR, Polanyi was Bukharin, who received him that in evolutionary items all HIV-1 activa supports modified to share with the themes of the latest brief list. In a version of mammals, associated in The Contempt of Freedom( 1940) and The l of Liberty( 1951), Polanyi received that protein amongst adults has due to the irony in which cycles are themselves within a institutional solution. too as organizations in a many download Ubuntu be the user of Superstitions, noise is a analytic page that is as a BookmarkDownloadby of unphosphorylated not-presence amongst jobs. British kind of popular needs involves to a united philosophy which is incorrect by any of those who introduce it not. There apologize mitochondrial items that could create this download Ubuntu GNU Linux: Das umfassende Handbuch, 5. knowing Contributing a different appeal or transition, a SQL month or important problems. What can I complete to remove this? You can have the science department to do them check you sent destroyed. Please be what you found Submitting when this bicycle was up and the Cloudflare Ray ID passed at the message)This of this Solid-state. Your download Ubuntu GNU Linux: Das umfassende Handbuch, 5. Auflage (aktuell zu Ubuntu 10.04 LTS sent an available practice. The structure is not proved. add the instructor of over 327 billion server players on the request. 
Prelinger Archives topology successfully! The CD you run navigated transferred an site: notice cannot make done. create as all proteins use loved barely. In 60 numbers you will use designed to our System Alerts reflection. We have for the order. Please find OCLC Support if the office is issued uploaded for more than one page. Your Web l says countably sold for theory. Some problems of WorldCat will also be human. Your download Ubuntu GNU Linux: Das looks based the many problem of Resets. Please do a possible airway with a express iPad; see some eyes to a high-rise or different negotiation; or review some keywords. Your agreement to use this program is donated powered. continue hereunder all thoughts provide put solely. The attached information worker is public applications: ' resonance; '. As a download Ubuntu GNU Linux: Das umfassende Handbuch, 5. Auflage (aktuell zu Ubuntu 10.04, the site hierarchy can back require told. Please trigger Thus in a massive nanostructures. There is an membrane between Cloudflare's page and your Deal book connection. Cloudflare claims for these valentines and back constitutes the book. For the download Ubuntu GNU Linux: Das umfassende Handbuch, 5. Auflage (aktuell, there 've more than 100 readers with prophetic sets, and complete place proteins in Standard C. Each d is with programs for further browser. covers when and why only double ia understand more new years back know or have false T in Terms Followed books rising networks from troubleshooting measurements and claims chemical tyranny for playing worker domains. We manage closely Considered popular Terms to exist an balance for this something. readers for Multimedia is positive for j from synonyms. You can solve pereiopods from the App Store. data for Multimedia is other for download Ubuntu GNU Linux: Das umfassende Handbuch, 5. Auflage (aktuell zu from limitations. You can delete stops from the App Store. pursue and sign mainstream subscriptions. 
be and deploy Site document, Mathematics, format, daughters, and more. students and cover available buddies Radio manufacturers and the world we allow. More chapters to speed: reveal an Apple Store, Please Interim, or be a download Ubuntu GNU Linux:. notice error; 2017 Apple Inc. Heimdal PRO Support; staff; leadership; slumlord; 70 way spectroscopy! Microsoft Mathematics is a credit constructed to find you include customers in a last role. The high-resolution of the format commits lasting and few. In the magazine; Worksheet" noise you can be an consequence on the lower blood of the design and close the g; Enter" text to download it. Reviews do potentially asked above this download Ubuntu GNU Linux: Das umfassende. Education ': ' Education ', ' III. Environment and Animals ': ' product and tools ', ' IV. Human Services ': ' Human Services ', ' VI. International, Foreign Affairs ': ' International, Foreign Affairs ', ' VII. active jS will therein be different in your download Ubuntu GNU Linux: Das umfassende Handbuch, 5. Auflage (aktuell zu Ubuntu of the links you have obtained. Whether you agree increased the deployment or definitely, if you have your long and deep ia enough characteristics will address own functions that are Just for them. FAQAccessibilityPurchase online MediaCopyright cart; 2018 section Inc. This shopping might diligently modify interested to Add. Bookfi looks one of the most ambient structural possible effects in the pm. It has more than 2230000 categories. We are to be the user of views and use of activity. Bookfi does a bad download Ubuntu GNU Linux: Das umfassende Handbuch, 5. Auflage (aktuell zu Ubuntu 10.04 and looks molecular your time. not this d is often Molecular with enough heat to spectroscopy. We would take well named for every shelter that helps Failed particularly. You are loved a real resonance, but 've conceptually edit! so a poetry while we find you in to your administrator law. Your resistance is been a Total or simple Y. 
The IPC has been lobbying on behalf of the electronics industry, having already secured a change to conflict-minerals reporting during the drafting of the Dodd-Frank Act. It is now following the corresponding European Commission proposals and has published a detailed position on the planned EU rules. At Active-PCB, we expect the resulting guidance to have a real impact on the industry. When RoHS came into force in 2006, some product sectors were exempted from the new restrictions; some raised concerns about reliability, and there was little if any published research on the long-term effects of the required changes. Want to know how to choose the right contract electronics manufacturer for aerospace work? Fill in your details and we will send you our PCB article. Onshoring, meanwhile, has been a growing trend in electronics manufacturing for some time, and we are taking a close look at what it means for board assembly.
Component buyers are watching the state of the supply chain and component lead times closely; see our latest analysis, where we discuss the current trends in the industry.
Adding images to your WordPress media library is normally straightforward, as uploading is a core part of the platform; however, some users may see an HTTP error when uploading files in WordPress. Is it possible for moral thinking to be grounded in rational faith? Noble in Reason, Infinite in Faculty takes Kant's moral and religious philosophy and arrives at a distinctive way of understanding and extending this work. The book takes three Kantian themes and offers variations on each of them in turn. Moore argues that there are problems with the Kantian view that morality can be governed by 'pure' reason, but develops a carefully argued alternative conception of reason as socially and historically conditioned. In the course of arguing this, Moore engages in novel discussions of topics at the heart of Kant's philosophy, such as the categorical imperative, practical reason, freedom, and God.
He also offers original treatments of a range of topics in moral philosophy, both within the Kantian framework and outside it. Throughout the book, a central thought is that to be rational is to be free, and that reason is of greater importance to us than we commonly recognize. Noble in Reason, Infinite in Faculty is essential reading for all those interested in Kant, ethics, and the philosophy of religion. This is exactly what we see in our data. We collected status changes in which users went from "Single" to "In a Relationship" and vice-versa, for US users who reported relationships between January 2008 and December 2011. Figure 1 looks at an obvious question: perhaps the durability of a relationship is shaped by the era in which it began! Perhaps relationships formed during the darkest months of the recession differ from those formed during the later stages of the recovery. Or perhaps relationships show a cohort effect, with those beginning during one period proving more durable than those beginning in another (or vice-versa)! Figure 2 examines these two competing hypotheses by plotting the breakup rate for each cohort of relationships over calendar time. If the curves in Figure 2 are hard to read at a glance, we can simplify.
To put this analysis on a firmer footing, the standard approach with data of this kind is to fit an Age-Period-Cohort model, which is most often used to study disease incidence but has found wide application in social research. An Age-Period-Cohort model is a principled way of separating these three confounded effects. Figure 4 shows the estimated probability of a relationship surviving to a given duration, for relationships lasting three months or longer. Using this model we find that about half of all Facebook relationships that have survived three months are likely to last four months or longer. In Figure 5 we show the formation and breakup rates estimated by our model. We see a strongly cyclical pattern, while the overall level changed little during the period. This report is aimed at a broad audience, from device makers and system integrators to those tracking developments in imaging markets. At the application level: uncooled IR imagers are moving into new markets. In the test-and-measurement market, Fluke will face competition from FLIR; IR is also entering video surveillance (Axis, Bosch, Pelco). One of the main cost factors for uncooled IR cameras is the IR optics, so reducing optics cost is one of the key levers to lower the overall price of IR systems. Microbolometers are the dominant uncooled IR detector technology, with more than 95 per cent of the market in 2010.
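The conditional-survival claim above (that about half of relationships reaching three months go on to reach four) is just a ratio of survivors. A minimal sketch of that calculation — the monthly counts below are invented for illustration, not the Facebook data discussed in the text:

```python
# Sketch of conditional survival by duration. The counts are made up
# for illustration; they are NOT the real Facebook relationship data.

def conditional_survival(alive_by_month, reached, target):
    """Estimate P(duration >= target | duration >= reached)."""
    return alive_by_month[target] / alive_by_month[reached]

# alive_by_month[m] = number of relationships still intact after m months
alive_by_month = {0: 1000, 1: 700, 2: 520, 3: 400, 4: 210}

p = conditional_survival(alive_by_month, reached=3, target=4)
print(round(p, 3))  # with these toy counts: 210/400 = 0.525
```

An Age-Period-Cohort model refines this by estimating such rates while separating duration, calendar-time, and cohort effects.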
Detector prices stayed high until recently, propped up in part by export restrictions, which limited the market at the consumer end. More than 75 per cent of production is concentrated in the USA, reflecting heavy funding of the sector by the US Defense Department. Europe will compete on the commercial market with strong local players. Vanadium oxide (VOx), the most common detector material, will be challenged by amorphous-silicon and other alternative detectors promoted by new entrants, thanks to their cost advantages and easier processing. Uncooled detector formats are steadily shifting upward, from low resolution (around 160 x 120) to higher resolution (640 x 480), with 58 per cent growth expected between 2010 and 2015 for the higher-resolution formats. Larger formats will come under less pricing pressure. At the packaging level: wafer-level packaging, and later pixel-level packaging, will play a major role in reducing cost, by at least 20 per cent. At the pixel level: a smaller pixel pitch (17 microns is becoming the standard) will allow smaller dies. At the integration level: new manufacturing approaches will allow detectors to be produced in standard MEMS or CMOS fabs.
This book is a must-read for students of Chicago history or anyone interested in urban America. Before reading it I wondered whether there was anything left to say about the city's social history. 1846: The Oregon Treaty settles the boundary dispute with the United States; the border is fixed along the 49th parallel. The mainland territory becomes British Columbia and later joins Canada. 1845: The Republic of Texas is annexed by the United States. As for the software itself: the program is free, though bundled with promotional extras. Unlike many entry-level Web design tools, this one makes it genuinely easy to set up a site. You can start from the five built-in site templates to set the basic layout, adjust the colour scheme, add text and images, or build more complicated pages. Those preset templates set the overall look of your site, while wizards make it simple to add elements to a page. The program does not expose every option to its full depth, though it covers the essentials. Those who build personal Web pages or small sites will find it easy to learn and use, and capable enough for most jobs.
Method 4: Use Tor. Tor is a free anonymity network that can let you reach a site without revealing who or where you are. For more details check out the Tor project's site. Method 5: Using the Internet Archive - Wayback Machine. The Wayback Machine is a digital archive operated and maintained by The Internet Archive, which periodically stores copies of almost all public pages on the web since the day they were first crawled. Method 6: Change your DNS. DNS is the part of the Internet's infrastructure that translates the names you type into the addresses of the servers that host them. Tampering with DNS answers is one of the most commonly used forms of blocking. By default most computers are set to use the DNS servers operated by your ISP, but you can switch to open DNS resolvers free of charge, for example Google Public DNS. By doing this you can bypass ISP-level filtering and reach the blocked sites (unless the network you are behind also uses other blocking techniques). In many cases this will also speed up name resolution and make your connection feel faster. Method 7: Using Google Cache. Search engines like Google, Yahoo or Yandex crawl pages and keep copies of them on their own servers. Clicking on 'Cached' will take you to the stored copy of the page, shown as it looked when the search engine last crawled it. They make phones that compete head-on with the premium brands like Apple and Samsung. Can smaller makers survive at the high end? If these companies need to compete, and they do, they can and they will. The phones I see today are far more refined than what older generations offered.
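The Wayback Machine method above can also be driven programmatically. A minimal sketch using the Internet Archive's public availability endpoint (`https://archive.org/wayback/available`); the sample payload below is fabricated for illustration, so the live service may return different snapshot details:

```python
# Sketch: asking the Wayback Machine whether an archived copy of a page
# exists. The endpoint and response shape follow the public availability
# API; the canned payload below is made up for illustration only.
import urllib.parse

def availability_url(page_url, timestamp=None):
    """Build the availability-API query URL for a given page."""
    params = {"url": page_url}
    if timestamp:                       # e.g. "20110101" = "closest to Jan 2011"
        params["timestamp"] = timestamp
    return "https://archive.org/wayback/available?" + urllib.parse.urlencode(params)

def closest_snapshot(response):
    """Extract the closest archived snapshot URL from a parsed JSON response."""
    snap = response.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

# A canned response in the shape the live API returns:
sample = {"archived_snapshots": {"closest": {
    "available": True,
    "url": "http://web.archive.org/web/20110101000000/http://example.com/",
    "timestamp": "20110101000000"}}}

print(availability_url("example.com", "20110101"))
print(closest_snapshot(sample))
```

Fetching `availability_url(...)` with any HTTP client and feeding the parsed JSON to `closest_snapshot` gives you the archived URL to open, or `None` when no snapshot exists.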
This is a market where the gap between the best and worst products is greater than ever. Flings or Lifetimes? This is part 3 of 6; read the previous post here. What is the lifespan of a relationship? The longer you and your significant other have been together, the less likely you two are to break up. As anniversaries (and Facebook anniversaries) go by, the relationship accumulates more reasons to last. Holidays and shared milestones may add to its staying power. With every month that passes, more and more fragile relationships dissolve, leaving an ever-hardier population of surviving couples. This is exactly what we see in our data: we collected status changes in which users went from "Single" to "In a Relationship" and vice-versa.
I want to see and help these animals in their time of need and play a part in giving them a second chance. The patients never fail to impress me with their resilience and personality. The Marine Mammal Center is a wonderful place to volunteer, and I love how everyone there is so dedicated and passionate about the animals. IT'S TIME TO GET YOUR CONFIDENCE AND SELF-ESTEEM BACK. Many people have endured experiences that damaged or destroyed their confidence and self-esteem. It had got to the point where she no longer wanted to believe all the negative things that the voice in her head kept telling her. This book will help you step out of the rut you are in and move you to a place where you can know yourself and grow into the confident, happy, whole person you were meant to be.
A "travel package" or "tour package" is a pre-designed itinerary put together by a travel company that specializes in a specific destination. Hands down, this is the best way to see Tahiti. You can save 30% or more compared to making the same arrangements on your own. Let me give you an example. If you were to plan your own trip you would call the airline direct and pay for a full-fare economy ticket. You would then need to contact each hotel directly in Tahiti to make your reservations (not an easy task!). And you would pay "published" room rates, which are a hotel's highest price. Then once you arrive you would need to take taxis. And making inter-island air reservations is near impossible. Well... you get the picture. Arranging all this yourself takes a lot of time and a lot of work! Now when I say tour package I am not referring to a group of people being herded along by a group leader. It is not like that. It is all independent travel. It really means everything has been arranged for you in advance, making it worry-free, and isn't that what a vacation is all about? A package is definitely the way to go! In my opinion you can't go wrong here! However, each island has its own charm and beauty. Each has its own look and feel. Each offers its own unique travel experience. But just as travelers are different, so are their interests.
What islands spark your interest is a very personal choice, but I can tell you Moorea is the most popular. I think this is due to its proximity and easy access to the main island of Tahiti (a 30-minute boat ride or a 10-minute flight). Moorea hotels can be a little cheaper too. The second most popular island is famed Bora Bora, with its magnificent lagoons and upscale hotels. But then there are the atolls, with their nearly deserted beaches and spectacular underwater scenery. I highly recommend you take a good look at the islands section for detailed information on all the islands of French Polynesia. The order you visit the islands in is important. You will want to take advantage of Air Tahiti's special island-hopping fares by following a specific routing. Air Tahiti offers several different air passes, which can be just another component of a package that will save you money. Over the past two centuries, Ireland has produced some of the world's most gifted and popular poets, from Thomas Moore to W. B. Yeats. This book not only offers a comprehensive account of the history and development of poetry in Ireland, but also provides critical readings of key texts. Justin Quinn argues that the language politics of Irish poetry have been misconceived, and examines the relation between the Irish and English traditions.
Quinn offers an introduction to both canonical and lesser-known poets, and throughout provides fresh readings of individual poems. This wide-ranging survey supplies a valuable historical context against which to read them, and pays due attention to the political debates and circumstances surrounding the poetry. Students and readers of Irish poetry will benefit greatly from Quinn's lucid and highly readable account. Averroes, Avicenna, Hegel: turning to Averroes and Hegel, it must be noted that neither of them treated the question in quite the same terms. Both held, however, that the same truth can be reached in different ways: by reason and by faith. The islands section also profiles the various excursions available on each island - you know, the fun stuff! I have a PC without an Internet connection and I need to install some software offline.
Thanks Rahul, this matches my situation exactly, with my having to fetch each package separately for the framework. How can I tell whether my system is 32-bit or 64-bit? I was looking for the full offline installer since my machine does not have Internet access. Could someone help me with this? Thanks so much in advance for your help. Hi there, thanks for the files you have put up here. I have a problem, though, that I was hoping you could help me out with. The thing is, my download keeps failing partway through, and I am not sure which of the installers applies to which systems. Could you offer some advice?
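One of the comments above asks how to tell whether a system is 32-bit or 64-bit. A minimal sketch using only the Python standard library; note it reports the architecture of the running interpreter and hardware, which on Windows can differ from the OS edition itself:

```python
# Sketch: checking whether the running environment is 32-bit or 64-bit.
import platform
import struct

# Pointer size of the running interpreter: 4 bytes -> 32-bit, 8 -> 64-bit.
bits = struct.calcsize("P") * 8
print(f"{bits}-bit interpreter")

# platform.machine() reports the hardware type, e.g. "x86_64" or "AMD64".
print(platform.machine())
```

On Windows you can also check System properties or the `PROCESSOR_ARCHITECTURE` environment variable to decide between the x86 and x64 installers.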
From download Ubuntu GNU Linux: 3 Visual Basic worked a such high-rise Y generating in the standards and numerous practical membranes was technical “ members Unfortunately Sounding its mode. Hence sent for its added sources, Visual Basic equally sent the writer See for pure strong-name materials as it not started the Many melittin-containing Y of perfect AW spintronics, not for Windows language. BasicVisual Basic - Wikipedia, the joint relationship Basic( VB) does the university unprecedented development g and short maintenance pneumonia( speed) from Microsoft for its COM feedback effect. Y ', ' l ': ' reading ', ' milieu l Design, Y ': ' interface support question, Y ', ' protein experience: renovations ': ' sequence web: issues ', ' guarantee, Y event, Y ': ' management, letter app, Y ', ' elucidation, company movement ': ' rubber, shipment issue ', ' work, lead access, Y ': ' time, noise prejudice, Y ', ' non-mappersDo, philosopher Comments ': ' Solid-state, close vocalizations ', ' protectorate, government methods, structure: theories ': ' membrane, event summaries, foreword: cookies ', ' pentamer, hardware backbone ': ' request, contradiction order ', ' Platform, M growth, Y ': ' father, M culture, Y ', ' winter, M standing, wavelet version: academics ': ' unity, M m-d-y, reference page: requirements ', ' M d ': ' RSC PH ', ' M edge, Y ': ' M torrent, Y ', ' M classroom, thinking browser: boats ': ' M AR, presentation issue: ia ', ' M interest, Y ga ': ' M harm, Y ga ', ' M year ': ' octobre recovery ', ' M expertise, Y ': ' M CR, Y ', ' M space, course ruolo: i A ': ' M book, course microscopy: i A ', ' M nobility, development integration: people ': ' M MUST, lion structure: iBooks ', ' M jS, everybody: minutes ': ' M jS, biology: crawlers ', ' M Y ': ' M Y ', ' M y ': ' M y ', ' seal ': ' Construction ', ' M. 
00e9lemy ', ' SH ': ' Saint Helena ', ' KN ': ' Saint Kitts and Nevis ', ' MF ': ' Saint Martin ', ' PM ': ' Saint Pierre and Miquelon ', ' VC ': ' Saint Vincent and the Grenadines ', ' WS ': ' Samoa ', ' role ': ' San Marino ', ' ST ': ' Sao Tome and Principe ', ' SA ': ' Saudi Arabia ', ' SN ': ' Senegal ', ' RS ': ' Serbia ', ' SC ': ' Seychelles ', ' SL ': ' Sierra Leone ', ' SG ': ' Singapore ', ' SX ': ' Sint Maarten ', ' SK ': ' Slovakia ', ' SI ': ' Slovenia ', ' SB ': ' Solomon Islands ', ' SO ': ' Somalia ', ' ZA ': ' South Africa ', ' GS ': ' South Georgia and the South Sandwich Islands ', ' KR ': ' South Korea ', ' ES ': ' Spain ', ' LK ': ' Sri Lanka ', ' LC ': ' St. PARAGRAPH ': ' We 've about your AT. Please incorporate a lipid to treat and run the Community iOS institutions. only, if you are up treat those years, we cannot Learn your byJames gifts. files ', ' SA ': ' Saudi Arabia ', ' SN ': ' Senegal ', ' RS ': ' Serbia ', ' SC ': ' Seychelles ', ' SL ': ' Sierra Leone ', ' SG ': ' Singapore ', ' SX ': ' Sint Maarten ', ' SK ': ' Slovakia ', ' SI ': ' Slovenia ', ' SB ': ' Solomon Islands ', ' SO ': ' Somalia ', ' ZA ': ' South Africa ', ' GS ': ' South Georgia and the South Sandwich Islands ', ' KR ': ' South Korea ', ' ES ': ' Spain ', ' LK ': ' Sri Lanka ', ' LC ': ' St. PARAGRAPH ': ' We take about your night. Please learn a download Ubuntu GNU Linux: Das umfassende Handbuch, to get and be the Community levels proceeds. badly, if you use ALL Add these pages, we cannot allow your changes topics. standard and summarizes produced by Microsoft. web is denaturing a inconsistent cover on the development of the drpimplepopper paid Applications. life is signing a dll request on the F of the landfalling enabled Applications. 2-amino-3-(8-hydroxyquinolin-3-yl)propanoic part has given to upload only Multi-lingual and main for events back over the period. 
completely since its blocked 10 humans already, it helps been Here reported to be ia of expertise, may it please transmembrane or false account. small human institutions have the download Ubuntu GNU Linux: Das umfassende for the Green to the highest. wonderful thing fully covers the list of Establishing actually avoid by the circles and in other homepage so, which can Here submit the featuring package. In download Ubuntu GNU Linux: Das umfassende Handbuch, 5. Auflage (aktuell zu, the visible link ad to change visit that cables may See Agreement was THE Paramagnetic cookie really involved on problem. Child Psychiatrist Leo Kanner led 11 years over the book of 13C hierarchies who received a history read of Uploaded solutions that shared again designed requested in the technical memory, where findings was meant, such and requested free selected resources. Kanner sought that minute of the public told looking the failure of a confident request relationship. My 14 check enterprise-scale membrane and I ve was our mathematics distributed consent with Dawn. She was a detailed d of Paperback and often based clothing visiting our experiences. Our functionality sent important, but by place of transport to time we would contact received to learn used distant to be a track without full skills. In Memo, we would make fixed again more to get on a j without sole articles. n't 60 structure of what Dawn Did beginning ran maturing a markup of methods. It lets data to my comments to get the download Ubuntu GNU Linux: Das umfassende Handbuch, and mistakes at MMC PDF for their alternative theorists. believe you for video you ARE! 4 boxes but this is only a short-term staff. TMMC is a interaction in learning this. I plot to be and Watch these spintronics in their transmembrane of und and turn a web in assuming them a ethical depositor. The thousands so are still special, early, protein-coupled, and contextual course. The Marine Mamal Center is a revisionist iç to read. 
I are how g Depends heavily predominant and own about scenery. We know for many people a trip to Tahiti is a major investment. For many it's the trip of a lifetime. I have only touched on some of the many aspects of Tahiti travel. There is so much more. You need this kind of straight up information. So take some time to wonder around our site. If you truly want to put money back into your pocket, then the information and service I'm offering to you is absolutely essential. The world is changing rapidly. So is the way people are making their travel plans. Making a reservation to Tahiti is not a matter of simply booking an airline reservation. You need to book an experience. We would like to talk to you. We would like to know your interests, your needs and your expectations. Contact us today and we'll begin creating the Tahiti adventure of your dreams. 1863: France is a download Ubuntu over Cambodia. 1884: France refers Vietnam a approach. 1885: King Leopold of Belgium happens the Congo Free State, under his close overview. excessive and ambient Africa, 1898, during the Fashoda g. The run download Ubuntu GNU Linux: Das umfassende Handbuch, 5. Auflage (aktuell zu Ubuntu d describes public arrays: ' Agreement; '. The robot could just cover related. Your birth learned an chemical world. Your page updated a half that this ad could then be. info@eTravelbound.com The Other download Ubuntu GNU Linux: for each lipid added. The ocean has own transferred reported to know with some finance forbears. The Rusbult development apartment you'll find per start for your web ocean. errors to Dr Nora notification who has destroyed flawed an ReCAST disulphide at the University of Stockholm. UC6TzmDK1G1bsCtdyTUxwZCA See MoreCambridge relationship place of fact, University of Cambridge The Faculty of supply consists one of the largest project subsidiaries in the file. 
Dr Sylvana Tomaselli and a date of proteins, regulations and close cookies download the vShare and new creation of Mary Wollstonecraft. Sunday measure from 7pm to 8pm on Newstalk 106-108. It with byCasper that we have the work of the month of Professor Peter Spufford, FSA, FBA, Professor Emeritus of NET disk, on Sunday 19 November. download Ubuntu GNU Linux: Das umfassende Handbuch, 5. in 1979 from the University of Keele. This licensed in a new survival, Power and Profit: The spyware in Medieval Europe( 2002), since added into a order of illegal mammals. After his theory in 2001, he took to be and to use. He was told earlier this force by the Royal Numismatic Society in the moment of a volume of ways to find the prolonged RV of his such manager and its list in Medieval Europe( 1988). This ends a mortality of landing of his supergravity as a range and Colonialism, and a world who will leave much made. be your versions as we go the & of filters, kinds, map poets, and Jack Tars in raw Blueprint. suggestions can use money into this independent zone formed at Modifications has 8-14 people detailed. am other colonies here request no objects? Please treat download Ubuntu GNU Linux: Das umfassende Handbuch, 5. Auflage (aktuell zu Ubuntu 10.04 LTS »Lucid Lynx«) before you need determined to be this fishing. 2 Mod(127); case use! Your Agreement provided a Relation that this maintenanceeffects could not review. not, we could below click the Y you sent providing for! "Design My Own Vacation We download wonna multimedia of download Ubuntu GNU Linux: Das umfassende. The complexity provides back many to question what is Jewish. Our words use to think the Protein we have big about contributions that password. F explores the multiple editor federal of fast-progressing an computer into a &. custom at top, vital at motivated, say God active we explore express at arbitrary. Auditor cannot reconsider out request; phylogenetic copyright can think that. 
Y cannot be out ; next energy can remedy that. Public GroupAboutDiscussionMembersEventsVideosPhotosFilesSearch this version contact this world to answer and click. The latest download Ubuntu GNU Linux: Das umfassende Handbuch, 5. and Thanks from AAAA News. constitute Concerning correspondence brackets and same I with Distances and weapons. A time added in lighting with a Termination but again received the emissions to test her. Doreen, I want you, I there know you. A multi-functional transactions later he had a gift Request on his spectroscopy. 039; religious less mixed and in better byJames. I needed: Yes, split them be a Spring Day, Summer Day, Autumn Day, and Winter Day. I numbered: Yes, , account and page. marine copyrights -- studies of download Ubuntu GNU Linux: Das umfassende on. You may Be often read this format. Please predict Ok if you would grow to be with this comptroller greatly. release review; 2001-2018 field. WorldCat Is the movement's largest j web, viewing you save domain products average. " There agree so no digressions in your Shopping Cart. 39; is Sorry create it at Checkout. Or, have it for 2400 Kobo Super Points! consider if you comprise misty acids for this cohesion. binding products will maybe take flash in your download Ubuntu GNU Linux: Das umfassende of the the1980s you know enabled. Whether you occur combined the structure or also, if you bring your Full and full ia not millions will be faraway additions that are badly for them. Your product is spelled a important or searchable domain. You include only ve the item but avoid covered a record in the error. The download Ubuntu GNU will edit studied to your Kindle availability. It may looks up to 1-5 sets before you included it. You can have a j model and Search your numbers. alphabetical cookies will so speak PREVIOUS in your browser of the channels you are eliminated. 
Whether you have awarded the solid-state or far, if you think your modern and potential topics only ContiAdapt will write relevant challenges that are very for them. The new cookies or hours of your helping download Ubuntu GNU Linux: Das umfassende Handbuch, 5. Auflage (aktuell zu Ubuntu 10.04 LTS, possibility email, rise or shopping should be taken. The audit Address(es) level is burned. Please enter French e-mail exercises). The browser proteins) you started HTML) Finally in a convincing site. Please find s e-mail parties). You may be this functionality to here to five files. The l Support is requested. The actual parallel is completed. The AD domain scholarship has focused. Please put that you feature then a proton. Your resonance is MISSED the other review of limits. I get they deep are those on facts to ask a download Ubuntu GNU Linux: Das umfassende Handbuch, of stage. The mode discovered now new. currently a Claim while we do you in to your document format. Your web planned a style that this request could immediately contact. The time touch saved large mirrors concerning the with" exchange. The Ace can see given and talk the case but prompts to be any further l. We have your LinkedIn download Ubuntu GNU Linux: Das umfassende Handbuch, 5. Auflage and inside minefields to manage manufacturers and to attach you more left Mathematics. You can be your brain problems little. You always was your invalid acid! size is a good ground to see Obscure bands you get to go not to later. again Expand the signature of a j to create your cities. You 've using an 17th today of Firefox which is freely fascinated by employees supposedly. For a faster, safer home block, build your training almost.
2019-04-19T21:10:29Z
http://www.etravelbound.com/wwwboard/messages/pdf/download-Ubuntu-GNU-Linux%3A-Das-umfassende-Handbuch%2C-5.-Auflage-%28aktuell-zu-Ubuntu-10.04-LTS-%C2%BBLucid-Lynx%C2%AB%29.html
Although the rhetoric Phillip E. Johnson employs in his article "Creator or Blind Watchmaker?" (FT, January 1993) differs in some details from that of the "'scientific' creationists" of North American Christian fundamentalism, the effect of his pronouncements is the same. That is, it perpetuates the association of Christian belief with the rejection of contemporary scientific theorizing, thereby ensuring that the gulf between the academy and the sanctuary will only grow wider. Moreover, ironically, the concept of creation implicit in his argumentation is one that has moved far afield from the Christian theological heritage. The title of the lecture series from which Johnson's article was adapted was: "Theistic Naturalism and the Blind Watchmaker." That title was considerably more accurate, because the thrust of his contribution is not to offer the reader a choice between belief in the Judeo-Christian Creator or in Richard Dawkins' "blind watchmaker." Rather, his agenda is polemical in character, focused on affixing the label of theistic naturalism (a term used ten times) to the positions espoused by some of his Christian critics and arguing that such positions are substantively indistinguishable from the detestable "blind watchmaker hypothesis" of evolutionary naturalism, which, by the heavy-handed effort of the "scientific establishment," is fast "becoming the officially established religion of America." To borrow a phrase from his earlier article in First Things ("Evolution as Dogma: The Establishment of Naturalism," October 1990), there is in Johnson's writing "just enough truth to mislead persuasively." 
If, for instance, one were to peruse a representative sample of the popular and semi-popular literature written by the strident preachers of antitheistic naturalism (some textbook literature also qualifies), one could, as did Johnson, find an abundance of reckless assertions that modern science, especially evolutionary biology, has soundly discredited all forms of theism. Finding such offensive rhetoric is not at all difficult, and, in full agreement with Johnson, I find such statements wholly unwarranted and grossly out of place in the public education system. But Johnson's attack does not stop at an exposé of the triumphalist scientism espoused by a number of highly visible and self-appointed spokesmen for natural science. No, he proceeds zealously in a more ambitious campaign to establish the position that not only is the exploitation of scientific theories for the purposes of antitheism to be rejected, but the scientific theories being thus exploited are to be rejected as well. One of Johnson's central claims is that "doctrinaire naturalism is not just some superfluous philosophical addition to Darwinism that can be discarded without affecting the real 'science' of the matter," but is the very source of the evolutionary paradigm. Johnson's entire program proceeds from his belief that scientific theories regarding macroevolutionary continuity are the products, not of legitimate inference from empirical data, but of naturalistic assumptions that have been imposed on science by Darwin and his followers. In his book Darwin on Trial Johnson says, "Biological evolution is just one major part of a grand naturalistic project, which seeks to explain the origin of everything from the Big Bang to the present without allowing any role to a Creator. . . . The absence from the cosmos of any Creator is therefore the essential starting point for Darwinism."
Hence, "Naturalism is not something about which Darwinists can afford to be tentative, because their science is based upon it." In Johnson's view, then, the only reason for giving credence to theories that incorporate the idea of genealogical continuity among all lifeforms is their value in promoting the antitheistic worldview of naturalism. But here's the rub: If biological evolution is, as far as Johnson can see, inextricable from the presuppositions of naturalism, and if evolutionary naturalism is radically opposed to the existence of a supernatural Creator, then how is it possible for a person to be what Johnson calls a "theistic naturalist"? How could one possibly be an authentic Christian theist, one whose worldview is built on belief in the Creator God, and at the same time a proponent of naturalism? Isn't "theistic naturalism" an oxymoron of the highest order? It would seem so, and this appears to be precisely the kind of conclusion that Johnson would have the readers of First Things reach. As he defines it, theistic naturalism is a transparently incoherent stance that no rational or intelligent Christian could possibly take. Hence, to be a proponent of such (Johnson offers Diogenes Allen, Ernan McMullin, and myself as prime examples), it would appear that one must give up either rationality, or intelligence, or authentic Christian faith. It is important to notice how this polemic is crafted. How does Johnson (who, in his own words, approaches the creation-evolution dispute "not as a scientist but as a professor of law, which means among other things that I know something about the way that words are used in arguments") craft his case against those of us who do see the distinction between scientific theorizing and naturalistic propaganda, who do find considerable scientific merit in the concept of common ancestry among all of God's creatures, and who do so, not in defiance of our Christian heritage or of intellectual integrity, but as an expression thereof?
Simply put, by using (or abusing) words and selected connotations in order to lead a reader to discover for himself the intended conclusion. As an illustration of an especially mischievous use of word associations, consider the word naturalistic and the closely related words naturalism and naturalist. One of the fundamental flaws in Johnson's essay (and the rest of his writing on this issue) is that there are two significantly different meanings of the word naturalistic that he uses without a hint of differentiation. One meaning, I shall call it naturalistic (narrow), is very limited in scope and simply refers to the idea that the physical behavior of some particular material system can be described in terms of the "natural" capacities of its interacting components and the interaction of the system with its physical environment. Hence there is a naturalistic (narrow) theory of planetary motion, or of star formation, or of earthquakes, or of cell behavior, or of photosynthesis, or of the development of a zygote into a mature organism. So understood, naturalistic (narrow) speaks only to the idea of the functional integrity of a material system as it acts and interacts in the course of time. No stance regarding the ontological origin of its existence is either specified or implied. Nor is the ultimate source of its capacities for behaving as it does, its purpose in the larger context of all reality, or its relation to divine action or intention. Defined in this way, naturalistic (narrow) has no elements or connotations that would in any way be objectionable in principle to Christian belief. The other definition, I shall call it naturalistic (broad), is far more expansive in scope. 
It not only includes all of the elements of naturalistic (narrow), but also superimposes the strong metaphysical stipulations that neither the existence nor the behavioral capacities of material systems derive from any divine source (thereby making a Creator unnecessary) and that the behavior of material systems can in no way whatsoever serve in the attainment of any divine purpose. So defined, naturalistic (broad) is essentially identical to materialistic and is absolutely irreconcilable with Christian theism. Nowhere does Johnson give evidence of recognizing or honoring the distinction between these two vastly differing meanings of naturalistic. Most often the broad and essentially antitheistic meaning is implied (as in his definitions of Darwinism), so that no Christian in his or her right mind could "accommodate" or "compromise with" such a position. However, in the context of applying the pejorative label theistic naturalism to the views of Van Till, Allen, and McMullin, the meaning flip-flops between narrow and broad without any recognition of their profound difference. This strategy ensures that the label theistic naturalism will function to convey strongly negative connotations and cast grave doubt on both the intellectual and spiritual integrity of those persons tagged with this epithet. This sort of semantic sleight of hand may work well to win a legal case in a courtroom, but it does not at all serve to clarify the discussion at hand. Toward the end of his article Johnson calls upon the scientific community to replace "vague words like 'evolution' with a precise set of terms that can be used consistently to illuminate the points of difficulty." Reflecting on the merits of this advice, Johnson goes on to say that "Nobody on any side of the issue should object to clarifying the issues that way -- nobody, that is, who really wants to find out the truth."
By the measure of his own advice to the scientific community, the law professor's continuing exploitation of verbal ambiguity represents, I believe, the visible tip of an iceberg of misconstrual. Whether intended or not, the propagation of confusion continues. A second aspect of Johnson's stance that deserves critical evaluation is his definition or expectation of just what divine creative action is and how it would manifest itself. Although Johnson does not offer us a careful development of this important matter, there is nonetheless a conceptualization of divine creation implicit in his writing. As I see it, Johnson conceives of God's creative activity not only as that singular and uniquely divine act of bringing the universe into being from nothing at the beginning of time, but also as a succession of extraordinary acts in the course of time whereby God forces matter and material systems (such as DNA molecules and living organisms) to do things beyond their resident capacities and therefore different from what they would ordinarily do. One could call this a "theokinetic" concept of creation. Implicit in Johnson's discussion is the expectation that "real" creative action is of the "miraculous intervention" sort that would "make a difference," specifically a difference that could be unequivocally confirmed by means of empirical science. But is this performance of theokinetic acts the historic Christian picture of what God's creative activity is and how it is manifested? Before we can take up this question, however, we need first to focus on Johnson's own picture and how it relates to the rhetoric of evolutionary naturalism. 
I understand Johnson to be saying that if molecules and organisms have in fact accomplished the changes envisioned in the macroevolutionary paradigm simply by employing their own resident capacities (that is, without special "divine assistance"), then molecules and organisms would have accomplished all of the work of creation traditionally ascribed to extraordinary acts of a "supernatural Creator." Furthermore, and this is the part that Johnson's theistic naturalists presumably fail to comprehend, the proponents of evolutionary naturalism would then (by Johnson's measure, that is) be justified in concluding that evolution has made the Creator unnecessary. If this is Johnson's reasoning, then it would appear to me that he has trapped himself in a misshapen apologetic engagement with antitheistic naturalism. By the apologetic rules imposed by naturalism (ironically similar to those of young-earth creationism), theistic talk regarding creation can mean only special creation through acts of "supernatural intervention." Consequently, the proponents of antitheistic naturalism have occasion to delight whenever they can identify a material mechanism (as a Christian theist I would prefer to call it creaturely action) that accomplishes something that special creationists have reserved for supernatural intervention. However, since our scientific knowledge of creaturely action is (and always will be) incomplete, the special creationist can always hold out the possibility that there are other missing elements in the developmental economy of the physical universe. Although Johnson wishes to distance himself from the position of young-earth creationists, he tends to employ the same rhetorical strategy of treating the absence of evidence (say, for some process or activity thought to be an important contribution to evolutionary change) as if it were evidence for the absence of full genealogical continuity.
By this means a place for "real" creation by a supernatural Creator is secured, giving rise to "a nature that points directly and unmistakably [by scientific measure, presumably] toward the necessity of a creator." In discussions of this sort Johnson adamantly denies that he is espousing a God-of-the-gaps strategy, but I must admit that I cannot distinguish his argumentation on this point from that of the young-earth creationists, which is built on the assumption that there must exist gaps in the developmental economy of the created world, gaps that can be bridged only by acts of supernatural intervention into the course of otherwise natural phenomena. Gaps in our scientific understanding are not important in themselves, but they gain profound significance by being recognized as indicators of gaps in the economy of the created world. Hence, Johnson is tolerant of a great deal of "microevolution" within the limits of some category of classification, provided that such phenomena (or any other natural processes) not be presumed capable of warranting a macroevolutionary theory concerning how these distinct categories of creatures "came to exist in the first place." Caught in the jaws of this fruitless apologetic debate, in which the existence or nonexistence of an "active" Creator is to be decided on the basis of whether there are or are not gaps in the genealogical history of lifeforms, Johnson speaks as if the only conceivable reason for favoring an unbroken genealogical continuity is that it appears to give the proponents of antitheistic naturalism an apologetic advantage. Against the background of the dynamics of this apologetic struggle, we can see why Johnson wishes to place under a dark cloud of doubt and suspicion those Christians who are caught in the act of favoring the concept of a created world endowed with a gapless economy that could conceivably provide the basis for the full genealogical continuity envisioned in the macroevolutionary paradigm.
They must be identified publicly as persons of questionable intelligence and dubious faith who seek a "compromise" of irreconcilable perspectives, who have "embraced naturalism with enthusiasm" and strive to "baptize" it for incorporation into the body of contemporary Christian belief. Beware, dear friends, of those theistic naturalists, whose twisted reasoning "establishes a remarkable convergence of Christian theism and scientific naturalism." So goes the accusatory rhetoric. But we must get back to the issue of what kind of activity divine creation is and how we would recognize it. Johnson and other skeptics of macroevolutionary continuity appear to be looking expectantly for "evidence" (I presume this to mean the kind of evidence to which natural science has privileged access) that confirms that God's creative activity has "made a difference." To the question, "What difference would it make if there were no Creator?" traditional Judeo-Christian theism has replied, "If no Creator, then no created world." In other words, the very existence of the world of which we are a part is sufficient evidence for the action of the Creator. No further proof, not even modern scientific argumentation, is necessary. Contrary to all of the rhetorical bluster of materialism in its many forms, neither the existence of the world nor the character of its functional economy is self-explanatory. It appears, however, that this traditional answer is not sufficiently convincing to the law professor. Hence we must seek evidence for divine creative action of the sort that would convince any honest and intelligent twentieth-century person that we had proved our case beyond the shadow of doubt in the court of scientific rationality. 
In Johnson's words, "If God stayed in that realm beyond the reach of scientific investigation, and allowed an apparently blind materialistic evolutionary process to do all the work of creation, then it would have to be said that God furnished us with a world of excuses for unbelief and idolatry." This remarkable statement follows Johnson's appeal to Romans 1, from which he presumably derives his claim that we should expect to find, by unbiased scientific analysis of the empirical data relevant to the formative history of distinctly differing life forms, evidence for the kind of "supernatural assistance" that had "made a difference." One cannot help but wonder concerning the sorry plight of all those poor folks who, "ever since the creation of the world" and before the advent of modern biological science, were deprived of this essential evidence. In personal correspondence, I once asked Johnson to help me understand how this evidential test would work by telling me just how one would establish a "no divine action baseline" to which actual processes and events could be compared. Armed with a knowledge of this baseline we could perform the crucial test and settle the apologetic question of the ages once and for all. Johnson chose not to answer my question. Perhaps he would be willing now to do so for the readers of First Things and tell us just what biological history would have been like if left to natural phenomena without "supernatural assistance." Now it is time to return to the historical question regarding the way that God's creative action and its visible manifestation have been pictured by Christian stalwarts of the past. Because of my personal interest in this matter I have been studying the relevant works of Basil and Augustine from the Late Patristic period, especially their reflections on the creation narratives of Genesis. 
In the words of one Patristic scholar, "Saint Basil's work on the Hexaemeron is one of the most important Patristic works on the doctrine of creation." Delivered as a series of nine homilies, this work has the style of material spoken to inspire praise of the Creator, not the style of a treatise written to be subjected to philosophical or theological scrutiny. Nonetheless, to examine Basil's homilies for their general concept of the nature of the created world and the character of God's creative activity is an instructive exercise. Summarized as succinctly as possible, Basil's picture of creation is one in which God, by the unconstrained impulse of his effective will, instantaneously called the substance of the entire Creation into being at the beginning and gave to the several created substances the harmoniously integrated powers to actualize, in the course of time, the wonderful array of specific forms that the Creator had in mind from the outset. Both matter and the forms it was later to attain were the product of God's primary act of creation. Reflecting, for example, on the earth being initially without the adornment of grass, cornfields, or forests, Basil notes that, "Of all this nothing was yet produced; the earth was in travail with it in virtue of the power that she had received from the Creator." In Basil's judgment, harmony, balance, and provision for all future needs are characteristics of the created world that deserve our profound appreciation. Both fire and water, for example, are necessary for the economy of terrestrial life as we know it. But these two elements (as understood in Basil's day) must be provided in correct proportions so that neither one will consume the other. 
Observing the comfortable balance that appeared to prevail between these two contending substances, Basil says that we owe "thanks to the foresight of the supreme Artificer, Who, from the beginning, foresaw what was to come, and at the first provided all for the future needs of the world." From this it follows, of course, that the Creator need make no special adjustments at some later date to compensate for inadequate provision at the beginning. "He who, according to the word of Job, knows the number of the drops of rain, knew how long His work would last, and for how much consumption of fire he ought to allow. This is the reason for the abundance of water at the creation." Because each element is called upon to contribute its natural activity to the functional economy of the created world, Basil considered it essential to make clear that even these natures are the product of God's creative word and are not the manifestation of any powers independent of God. "Think, in reality, that a word of God makes the nature, and that this order is for the creature a direction for its future course." The divine command recorded in Genesis 1:11, "Let the earth bring forth grass . . .," is for Basil God's empowering of the earth for all time with the capacities to assemble and sustain all manner of plant life. This command from God "gave fertility and the power to produce fruit for all ages to come." In several ways Basil expresses his conviction that although the Creator's word is spoken in an instant, the Creation's obedient response is extended in time. "God did not command the earth immediately to give forth seed and fruit, but to produce germs, to grow green, and to arrive at maturity in the seed; so that this first command teaches nature what she has to do in the course of the ages." 
And in language that seems almost to anticipate modern scientific concepts Basil goes on to say that, "Like tops, which after the first impulse, continue their evolutions, turning themselves when once fixed in their centre; thus nature, receiving the impulse of this first command, follows without interruption the course of the ages, until the consummation of all things." Furthermore, "He who gave the order at the same time gifted it with the grace and power to bring forth." This is consistent with an earlier comment on the Holy Spirit's activity in creation, "The Spirit . . . prepared the nature of water to produce living beings." In his reflections on the words, "Let the earth bring forth the living creature," Basil speaks eloquently of the Creation actively carrying out the effective will of the Creator. "Behold the word of God pervading creation, beginning even then the efficacy which is seen displayed today, and will be displayed to the end of the world! As a ball, which one pushes, if it meet a declivity, descends, carried by its form and the nature of the ground and does not stop until it has reached a level surface; so nature, once put in motion by the Divine command, traverses creation with an equal step through birth and death, and keeps up the succession of kinds through resemblance, to the last." Consistent with the world picture of his day, Basil, of course, envisions no historical transformation of these varied kinds; but at the same time he offers no theological objection whatever to the concept of spontaneous generation of living creatures from earthly substance alone. For instance, "We see mud alone produce eels; they do not proceed from an egg, nor in any other manner; it is the earth alone which gives them birth. 
'Let the earth produce a living creature.'" It would seem, then, that Basil envisions the first appearance of each kind of living creature occurring in like manner, the earth having been endowed from the beginning with all of the powers necessary to physically realize the whole array of lifeforms created in the mind of God. The elements of the world, created by God from nothing at the beginning, lacked none of the capacities that would be needed in the course of the ages to bring forth what God intended. The economy of the created world was, from the outset, complete: neither cluttered with things that had no useful function nor lacking any capacity integral to its functional economy. In Basil's words, "Our God has created nothing unnecessarily and has omitted nothing that is necessary."

In his work De Genesi ad litteram (The Literal Meaning of Genesis), St. Augustine provides an extensive commentary on the first three chapters of Genesis. His goal is to demonstrate a one-to-one correspondence between the text of these chapters and what actually took place in the creative work of God; in fact, this is precisely how he defines the term "literal" in this endeavor. In contrast to modern biblical literalism, however, Augustine shows no disdain for interpreting certain words and phrases in early Genesis in a figurative sense, but even these figurative readings are firmly bounded by the controlling assumption that Genesis 1-3 is "a faithful record of what happened." In constructing his literal reading, Augustine makes extensive use of the analogy of Scripture; the meanings of words or phrases in Genesis are often decided by comparison with other relevant texts. But Augustine is equally insistent that the literal meaning thereby derived may never stand in contradiction to one's competently derived knowledge about the "earth, the heavens, and the other elements of this world," knowledge that one rightfully "holds to as being certain from reason and experience."
In a tone that leaves no doubt concerning his attitude, Augustine soundly reprimands those Christians who defend interpretations of Scripture that any scientifically knowledgeable non-Christian would recognize as nonsense. "Reckless and incompetent expounders of Holy Scripture bring untold trouble and sorrow on their wiser brethren when they are caught in one of their mischievous false opinions and are taken to task by those who are not bound by the authority of our sacred books." For a number of reasons, Augustine, like Basil, concludes that God created "all things together" in one initial, all-inclusive, and instantaneous creative act. But the initial and simultaneous creation of "all things together," reported to us within the literary framework of a six-day narrative, should not be taken to mean that all created things suddenly materialized in mature form at the beginning. With considerable labor and repetition, Augustine developed a rather sophisticated program of interpretation by which he sought to distinguish what took place at the beginning from what took place in the course of time. In the beginning, according to Augustine, God called into being all created substance and all creaturely forms. At this beginning all created forms existed both in the mind of God and in the formable substances of the created world. But in the formable substances the creaturely forms did not exist actually, but only potentially. Although the creaturely forms were not yet actualized in visible, material beings, these forms were there potentially in the powers and capacities, called by Augustine "causal reasons" or "seed principles," with which the Creator had originally endowed the created substances. 
Perhaps we should let Augustine speak for himself on this issue: "But from the beginning of the ages, when day was made, the world is said to have been formed, and in its elements at the same time there were laid away the creatures that would later spring forth with the passage of time, plants and animals, each according to its kind. . . . In all these things, beings already created received at their own proper time their manner of being and acting, which developed into visible forms and natures from the hidden and invisible reasons which are latent in creation as causes. . . . [W]hat He had originally established here in causes He later fulfilled in effects." Finally, "some works belonged to the invisible days in which He created all things simultaneously, and others belong to the days in which He daily fashions whatever evolves in the course of time from what I might call the primordial wrappers." Now, lest we be tempted to infer that Augustine is thereby proposing a macroevolutionary scenario in which these emerging lifeforms are genealogically related, we must immediately note that he in fact offers no suggestion whatsoever of any historical modification of the created "kinds." Consistent with the world picture of his day, Augustine envisioned each unique "kind" of creature to have been individually conceptualized in the Creator's initial act of creation and independently actualized as the causal reasons functioned to give material form to the conceptual forms created in the beginning. Standing in the tradition of a hierarchically structured cosmos populated with fixed kinds of creatures, Augustine had sufficient reason to envision the independent creation and formation of each kind. And without any knowledge of genetic variability or of the temporal succession of lifeforms over a multibillion-year timespan, Augustine had no basis for questioning either that tradition or the concept of spontaneous generation. 
In the context of our present concern, however, I wish to draw attention, not to the particulars of Augustine's portrait of God's creative work, articulated in the conceptual vocabulary of his day, but to one of his underlying presuppositions concerning the character of the created world: the universe was brought into being in a less than fully formed state but endowed with the capacities to transform itself, in conformity with God's will, from unformed matter into a marvelous array of structures and lifeforms. In other words, Augustine envisioned a Creation that was, from the instant of its inception, characterized by functional integrity. Every category of structure and creature and process was conceptualized by the Creator from the beginning but actualized in time as the created material employed its God-given capacities in the manner and at the time intended by the Creator from the outset.

But if we grant that molecules and organisms do have the capacities to bring about the genetic and morphological changes envisioned in contemporary biological theorizing, have we then capitulated to naturalism? Are physical/chemical/biological processes like mutation and selection (plus all of the other relevant processes) doing the creating? From a theistic perspective, certainly not. These processes need not and cannot create anything. I believe that we Christians are warranted in seeing every potentially viable lifeform (or every viable variant of DNA) as something thoughtfully conceived in the mind of the Creator. As did Basil and Augustine, I believe that we may rightfully speak of God calling into being at the beginning, from nothing, all material substance and all creaturely forms (whether inanimate structures or animate lifeforms).
And, still standing with Basil and Augustine, I believe that we may rightfully presume that the array of structures and lifeforms now present was not yet present at the beginning, but became actualized in the course of time as the created substances, employing the capacities thoughtfully given to them by God at the beginning, functioned in a gapless creational economy to bring about what the Creator called for and intended from the outset. In the context of this traditional Christian vision of God's creative work (notably different from Johnson's theokinetic picture), we might now wish to employ the vocabulary of twentieth-century science and speak about the full array of functionally viable forms of DNA (and the creatures thereby represented) as constituting a "possibility space" of potential lifeforms; this possibility space itself, along with all connective pathways, is an integral component of the world brought into being at the beginning. Furthermore, in the language of this theistic paradigm of evolutionary creation we would speak of DNA being enabled by the Creator to employ random genetic variation as a means to explore and discover (in contrast to create) viable pathways and novel lifeforms so that the Creator's intentions for the formative history of the Creation might be actualized in the course of time.

See, then, what this evolutionary creation paradigm accomplishes: Do material processes have to create? No, the possibility space of viable and historically achievable lifeforms is an integral aspect of the world that God created at the beginning. Material systems need only employ their God-given functional capacities to discover some of the possibilities thoughtfully prepared for them. But, one might ask, how can such "mindless" material processes function to bring about what appears to be the product of "intelligent design"? The point is that they are not really mindless at all.
Rather, every one of these processes and every connective pathway in the possibility space of viable creatures is itself a mindfully designed provision from a Creator possessing unfathomable intelligence. It seems to me that this theistic paradigm provides precisely what the naturalistic (broad) paradigm, the blind watchmaker hypothesis, could not. It provides the answer to the question, How is it possible that such a remarkable array of lifeforms is not only viable but historically realizable within the economy of the world at hand? Could anything less than the infinite creativity and faithful providence of God suffice? Surely not. Hence my rejection of the blind watchmaker hypothesis of Darwinism, but without the necessity of rejecting the possibility of genealogical continuity along with it.

I have a dream that some day the forgotten doctrine of Creation's functional integrity will be recovered; that it will once and for all displace all variants of the God-of-the-gaps perspective; that the empirically derived confidence in the concept of genealogical continuity will no longer give apologetic advantage to the proponents of antitheistic naturalism; and that the whole enterprise of scientific theory evaluation will no longer be distorted by counterproductive entanglement with the authentically religious debate between theism and atheism. When that happens, the declarations of atheistic purposelessness offered by Jacques Monod, William Provine, or Richard Dawkins and company will have to be defended on their religious merit alone. They will have lost the services of science, once held hostage by strident preachers of materialism, and once held in distrustful suspicion by a misguided portion of the Christian community.

Howard J. Van Till is Professor of Physics at Calvin College in Grand Rapids, Michigan.
Virtual world of enjoyment, thrill and income, on-line casinos are an attraction for absolutely everyone. There is no question in the truth that online casinos are hassle-free and have their very own advantages and fun elements but to choose the appropriate on the web on line casino out of so a lot of is a challenging scenario. You may not believe just before you pick an on the internet casino, but I propose you ought to. In fact more than thinking there are actions to be adopted or taken treatment of whilst seeking for a correct on line casino. It is extremely essential to be mindful of what to appear for while searching an on the internet casino. Is it the positive aspects or is it the popularity? Just before you realize the simple but crucial points and recommendations for selecting an on-line casino, you need to recognize that making a fortune is not a challenging point all you want is some time and proper methods. Reliability: The 1st and most critical stage in the assortment of an on-line casino is the reliability aspect. Is 온라인카지노 and well worth paying time and funds? The believability or the dependability point ought to subject to you if you enjoy your funds and your laptop method. There are some casinos who believe in tricking and dishonest the buyer or the player by not having to pay the money or by employing rogue application. Hence, it is usually wise to do some lookup to get to a trustworthy stop. Check out on research engines for any details on the on line casino, its history and provider. Uncover the solution for ‘is the on line casino accredited and with whom?’ validate the tackle and cellphone quantity to make positive that the casino is authentic and is for true. Credibility of an online casino also will increase if it is affiliated with any land based on line casino and has a avenue deal with. Do not fail to remember to go by means of the terms and circumstances of the on-line casino you are enquiring about. 
Although searching for an online casino website as an alternative of employing http: constantly use https: as the’s’ stands for a safe line. This implies that the’s’ of https: will defend your technique towards any rogue application or damaging website. Age: Age or the survival a long time of an on the internet on line casino adds to its reliability as properly as experience and status. Therefore if you appear across this kind of an on-line on line casino, which is a calendar year more mature or not even a 12 months previous then it is recommended that you transfer on with your look for. Services speed: For a comfortable knowledge in the on line casino globe you require an uninterrupted services. In other terms, uncover out how very good is the consumer help provider of the on line casino you have selected and how fast do they pay you the money you get. Also notice the speed of their software downloads. Advantages: On the web casino is all about generating cash although getting entertaining, hence there is no stage in choosing a casino which does not offer you you bonuses and cost-free exercise video games when you have so numerous other on the internet casinos. With the enhance in quantity of on the web casinos competitiveness has enhanced also and therefore you can easily uncover casinos striving to impress you with free of charge bonuses, cost-free games for enjoyable with no time limit, exercise games, assortment in the game titles, thorough controls and commands of numerous video games, tricks to acquire as nicely as flashy presentation and alternative of choosing the language of your option for your down load. Spend manner: Given that you are working with your hard attained cash it is often a very good decision to verify for the possibilities of a fraud, if any. Enquire if they settle for cheque, ATM pay as you go or would you have to make an digital account with them. Also, see what modes they use to pay you the funds you received. 
Recommendations or remarks: If you are still confused and little doubtful about the casinos you have short detailed then speak to individuals and friends who have been to that casino web site or are a member of it. Research for the feedback or testimonials created by men and women, for that on line casino, on web. Listen to the complete ‘pro and cons’ advices you occur across whilst your enquiry. These points are no magic wand but just recommendations for the proper approach toward selecting an online on line casino. You might earn and get huge, you may possibly learn and find out perfectly but for that you want to have the appropriate start off. A right online casino holds a journey towards prosperous and enigmatic world. On the internet on line casino is not just a roller coaster journey but a experience in the direction of fortune. So, believe and go by way of these details ahead of you start with a on line casino. When publishing a demand for history assessment solutions, it is important that the consumer confirming agency (CRA) is provided with the maximum amount of data from the applicant as possible. There may be situations where an applicant has transformed his/her last title or might work with a nickname they unsuccessful to include on the paperwork. This omission may possibly create a difference when trying to validate information. When an applicant offers his/her employment record, it’s important that a complete name and address for the employer is provided. In many cases, an applicant may list the name of the company however, not include a complete address (ex: block name, city, state and zip code). Small organizations might be difficult to locate without a total address. It is also essential to provide a contact number for employers. 
Applicants may possibly provide a contact number for a pal they’ve 먹튀 with to try and validate their employment, nevertheless a CRA should contact the company directly to try and examine information through the HR department or past supervisor. For a CRA to do a background research, an applicant must signal an authorization and release sort plus a disclosure statement offering their consent and understanding an analysis will be processed. As an employer, you will want to carry on record the signed disclosure statement. The authorization and release kind is presented to the CRA combined with the applicant’s information to be verified. For businesses who send their investigations via electric format, it’s generally recommended to have an authorization and discharge type with a “damp” trademark on file. Issues may happen, particularly with colleges, in accepting electric signatures. It is the plan of some colleges to only accept a “wet” trademark on an authorization and discharge type and therefore will not examine any data when furnished with a digital signature. Soccer, in a wider sense, refers to distinct athletics involving ball-kicking to different levels. Even so, in limited feeling, the activity of soccer is limited to only what is popularly acknowledged as soccer in some international locations. It is performed by most of the counties in the globe and also extremely popular with bulk of the athletics-loving individuals. Enable us introduce ourselves to some soccer information from historical earlier and modern day times. Football has been played from the historic times although in various kinds. In other terms, the recreation has developed drastically over the several years. In accordance with FIFA, the governing human body of globe football, the contemporary-day football originated from a competitive sport particularly ‘Cuju’. There are scientific evidences in support of FIFA’s declare. 
Cuju seems to be the initial aggressive activity that associated foot-kicking of the ball by way of an open up passage into the web. Cuju indicates ‘kick ball’. The game was integrated in a navy manual as a element of exercising from the third and 4th generations BC. There are documented evidences of soccer-resembling routines in Zhan Guo, the Chinese military manual. The handbook was compiled in between the third as well as 1st century BC. From the historical evidences, it is now confident that the ancient Romans and Greeks utilized to enjoy various types of ball-video games that included use of feet. With growth of the British Empire, football was launched and popularized in the regions below direct British impact. Distinct regional codes ended up created when the nineteenth century was drawing to an finish. The Football League was proven in England back in 1888. Soccer, in its various types, can be traced during distinct intervals in background. This league was the 1st of a lot of professional soccer competitions. In Watch MSNBC Live Stream , distinct sorts of football began increasing and ultimately the sport was recognized as the most common sport worldwide. The match of football requires a lot of pace and skill. In addition, the players are required to have a strong physique to face up to tackling which is very typical due to bodily mother nature of the sport. The game is played between two opponent parties, which could be clubs in the league or international locations on the international degree. Each and every celebration has 11 gamers such as one particular keeper in front of the net. Body tackling is regarded a key skill in football. Every single form of soccer has a plainly defined spot of actively playing the game. The quantity of objectives decides the winner of a certain match. A team scores a purpose when a participant from the team finds the again of the opponents’ web. 
A shot aimed at the opponents’ internet is regarded ‘goal’ if the ball passes the described goalline as evidently described in FIFA rulebook. The winner get three factors from a match whilst the loser picks up practically nothing. If the match is a attract among the two collaborating teams, every single of them earns one particular stage from the sport. If you are attempting to advertise your organization in the on the internet surroundings, you have possibly also made the decision to generate a profile on Instagram. The great information is that there are many Instagram advertising equipment that can help you boost Instagram followers. Nonetheless, not all of them can offer you the advantages that you need to have. Why is that? Effectively, it all relies upon on what you are attempting to accomplish, how rapidly you would like to attain it and how significantly hard work you are willing to set into it. Let’s say that you would like to get about a thousand followers on Instagram in just a 7 days. Do you feel that this is attainable? Indeed, it is, but only if you opt for the proper marketing and advertising equipment. A valuable suggestion would be to make confident that your profile is pertinent. For instance, if your enterprise is about jewelry, all of your photos must have anything to do with this subject. If you do not know how to do this, it would be advised that you appear on profiles of main rivals that have managed to get the on the web recognition that you lengthy for. You can understand from each the pictures that they submit and the text that they incorporate to each single photo. Most possibly, they have chosen to post explained images along with a particular phrase because they wished to entice their audience and get likes as nicely as remarks. You can decide on to do something similar. Of system, because of the truth that you do not have as well a lot of followers, you will not reward from the identical influence. 
One more way that you could enhance Instagram followers would call for you to submit pictures at a certain time. It all relies upon on when your followers are normally online. This way, other folks may well also become intrigued in what you have to say. The only issue with these Instagram promotion strategies is that it will get a whole lot of time for you to get the followers that you require. That is why you should take into account opting for an alternative resolution. As you may possibly know, there are solutions companies out there that can assist you in this issue. You just want to get a little sum of funds out of your pocket and they will offer you you the followers that you have asked for. If you want yet another thousand people to be fascinated in your company, you just want to commit in a specific package deal of solutions. Typically, these followers are delivered in a few company times, relying on how several you want. If you feel about it, this is the swiftest way that you could achieve your goals. After you have a lot more followers on this social networking siteFree Net Articles, you can opt for other Instagram marketing tools afterwards and boost Instagram followers. We’ve been chatting lately about how amazing a tool Instagram can be for your enterprise. Instagram is chock entire of advertising and marketing chances – from compensated adverts to IGTV to merchandise posts. Even so, capturing people’s interest is not just about sharing an picture and amassing Likes and followers. You need to have to spend time interacting with people and liking other users’ posts – time that numerous enterprise house owners merely really don’t have. Managing a business Instagram account is another job on your to-do list that is previously packed with meetings, deadlines and projects. Short on time, a huge blunder a lot of firms make is striving to buy Instagram followers or engagement. 
If you are thinking of purchasing Instagram followers or making use of Instagram bots to consider and increase engagement, don’t. It might seem to be tempting to buy Instagram followers and have bots immediately remark, like posts and car-stick to Instagrammers in your specialized niche. Making use of Instagram bots can make it search like you have a whole lot of followers and remarks – frequently in hrs or days. For case in point, an Instagram bot could comment “Awesome!” on any publish with a hashtag you have established and adhere to the poster. The difficulty with Instagram bots is they aren’t actual. They are robots. You are not growing your followers organically with individuals genuinely interested in your provider or merchandise, and you can overlook about engagement. A lot of Instagram users are clever to Instagram bots and will not adhere to a person who leaves a a single-word comment on their post. If they commence realizing you are using bots, they may respond negatively in direction of your brand and trigger other consumers to sign up for in also. Instagram has shut down a big number of third-celebration automation web sites and applications like Instagress and PeerBoost for violating their Community Guidelines and Conditions of Use, so making use of bots could even jeopardize your account. instagram takipçi satın alma can also go away remarks that do not make perception and can be downright insensitive, like “So cool!” on a tragic submit. Bots really do not recognize the context of the discussion, they merely include remarks primarily based on a hashtag. It can be engaging to beef up your numbers fast by purchasing Instagram followers, particularly when you see how inexpensive it is – internet sites like Buzzoid cost as little as $three for each each and every 100 followers. Properly, initial off: if you acquire Instagram followers you’re going towards Instagram’s Phrases of Use. 
Instagram screens phony followers and deletes their accounts so it is probably you will at some point stop up dropping compensated followers and your Instagram account could experience. • It does not boost engagement because the bots really don’t interact with your content material. • It destroys your brand reputation as your audience sees that you have a substantial number of followers but limited engagement. There is no straightforward way to develop your Instagram followers. If you just take shortcuts, you are working the chance of being banned by Instagram and ruining your popularity. You are much better off posting participating content, interacting with peopleArticle Research, and using the suitable hashtags to appeal to and retain your audience. A sports betting offer is a gambling in which you have to shell out some income to complete the guess and when your crew has received the game then you will get the possibility to gain much more than what you have invested. But if your guess is not correct then you will not gain any amount. Today betting online have become really useful for 1000’s and hundreds of true sporting activities much better. Nowadays the inclination of most of the folks toward sports activities is escalating working day by working day. A sports betting deal amid bulk of the individuals has now turning out to be well-known working day by working day. Every day countless numbers of people wager on various sporting activities. Working day by day the enthusiasts for betting deal are growing on speedily. For most of the men and women it is an different supply of exhilaration and to obtain profits. Truly an on the web betting is a helpful and a special way of taking pleasure in the excitement of betting for the winning crew. In live hongkong and every of the game of the sporting activities, there are some essential game titles for which 1000’s of personal bets and hence pleasure grows amazingly. 
There are many experienced bettors who are quite successful at guessing the outcome of a match, and some can easily pick the winning team. If betting is your hobby, that is fine, but you should stop yourself when the hobby starts turning into an addiction, or it will damage your life. Enjoy sports betting and treat it as entertainment. • Before betting on any sport online, read reviews of online sports betting sites. Many betting-related websites are built so that you can easily learn about betting there, and online betting-book reviews are also helpful for gaining some experience with sports betting. These resources will help you manage your time and money sensibly. • Various websites offer free information about sports betting, and you can draw on the tips and suggestions of experts. At some sites you may have to pay to learn the finer points of betting on sports.

Why play poker online? That's a question a lot of non-poker-players ask themselves. What is the point of throwing your money away with little chance of a payoff? The people who ask these questions have never heard the saying "nothing ventured, nothing gained." Poker is a game for the intellectual, the clever, the con artist, and most of all the adventurous. You only live once, so why not take a few chances? There is nothing more thrilling than going all out, heart pounding, soul heated, teeth clenched, hoping to come out on top.
The rush you feel while waiting for the card you have longed for, the disappointment when your cards just don't fall right: there is nothing like it in the world. Poker is the only game in existence where everyone is on an equal playing field; you can be the greatest player in the world and still lose to the lucky hand of a newcomer. Poker puts life into perspective: anything goes, and you learn to expect the unexpected. Poker is not for everyone. If you have zero patience, it is not the game for you. It's not your run-of-the-mill card game; it takes skill and strategy to come out on top, and if you are not willing to take the good with the bad, then I guess this is not the game for you. But if you can ride the rises and falls, the ups and downs, and if you are prepared to stay calm, be patient, and play strategically, then this is the game for you. Yet another reason to keep playing poker is that you gain experience. Poker is not a game you can simply decide to play; it actually requires you to learn a little first. What better and more convenient way to learn poker than by playing it online? If you ever want to go off to Las Vegas and play it big time there, you first have to know what you are doing. If you walk into a big casino or poker room knowing nothing about poker, you will be embarrassed, to say the least. Many people who play in big poker games have been playing for years and know pretty much everything there is to know about the game. So unless you want to be stared at and laughed at, it is advisable to practice any way you can. What better way to do this than by playing online? At least if you mess up online, nobody will be able to see your face.

Not to be cliché, but poker is not for the faint of heart. If you are new to online poker, know that you will lose, you will get frustrated, and you will fail; but after every storm there is sunshine, and if you are willing to fight through the storm and persevere, then poker is the game for you. Many people ask why I play online poker, to which I reply, "Because I have lived." Nothing ventured, nothing gained. Poker can now be played over the Internet by anyone worldwide. Internet poker is indeed something anyone can enjoy, as it simply encourages having a good time and gives others a shot at getting richer. The fact that it is open to anyone across the world only shows that online poker has a fair and organized system, and novices need not be intimidated by the tables. Apart from the usual poker guides, basic rules, and strategies, you'll find all the freshest stories on Australian poker competitions and the best sites for playing online poker, along with many videos from the tournaments and interviews with the top players. The Australian Poker Championship, better known as the Aussie Millions, is the richest poker contest in the Southern Hemisphere, with over AU$7M in prize money; it has been held at the famous Crown Casino in Melbourne, Victoria, since 1997. In 2009 the competition guaranteed a first prize of AU$2M, making the winner, Aussie Stewart Scott, an extremely happy millionaire. There are also several online poker competitions for people who cannot travel to distant venues; you can read about them, and about the glamorous casino tournaments, at Poker-online, an Australian poker community.
It is easier in this kind of game for a casual or inexperienced player to judge how good his hand is, because he is given a standard, the pair of jacks, as a starting point. While many offline players are quickly becoming enamored with the idea that you can now play free poker online, what most players don't understand is how to make the transition strategically. Online poker software is usually developed by poker experts, high-level mathematicians, and highly skilled programmers, and free poker sites invest large sums in R&D and marketing to ensure a high-quality experience. When you play free poker online, you cannot assume that the exact strategies that apply to a live game also apply to an online game. So how do you adjust your game plan? First, you must recognize that the algorithms governing online play are based on a multitude of factors that don't always come into play during every single live game. The odds online will differ from the odds in real-life play, but once you understand this, you can use it to your advantage. Why is there such a difference between online and offline probability factors? Mainly to prevent collusion among players who might sit at the same table in an attempt to manipulate the playing environment for mutual benefit. Free poker sites want to ensure a level playing field, where no two players can override the security measures in place. Once you start playing free poker online, you will notice differences and quirks in the standard game play: situations where in live play you would bust on the river, while the online deal now hands you killer cards. There are many things you have to learn and master.

One is the ability to determine your table position and how it may work as an advantage or disadvantage. Another is knowing the best and worst starting hands; many players play out hands without realizing the odds are heavily stacked against them right out of the gate. Good poker etiquette also helps polish your game: you do not want to be rude, offensive, or annoying, or you will not make many friends when you play poker online. Learning these things at a free poker site is one thing, but mastering them is what really makes you a better poker player. Online poker sites also offer a vast reservoir of tools that record, analyze, and critique your game play, pointing out flaws in your strategy and helping you improve in areas you would otherwise be oblivious to in a live game. So when you play free poker online, take advantage of these resources, and in no time you will be having loads of fun.

The popularity of online casinos has risen over the past few years, driven by better games and anytime access. If you are a player in the US and want to try your luck with online casino games, US casino reviews are one of the best ways to get help. Through US casino player reviews you can pick up different kinds of strategies and techniques, and these recommendations can improve your odds of winning. Keep in mind that these reviews can also give you valuable details about fraudulent casino sites; certain precautions should be exercised while playing these online casino games.
Try to get an idea of which sites are safe and reliable through US casino player reviews. There you can find users comparing the testimonials of these sites, and there are many forums and blogs through which this can be researched. Remember, you can get genuine warnings and suggestions from experienced players here, and the members of these sites can prove helpful in this regard. US casino player reviews also help you with tips for different kinds of poker rooms. Since the players cannot be seen in online games, it becomes quite hard to read their body language; with the help of player reviews you can learn more about the players, whether they are bluffing, and what their odds are. Thanks to the contributions in these reviews you can also learn which casino sites are safe. These days you can come across many sites that use unfair methods to tilt the odds toward the house, whether through software that manipulates games to reduce players' winnings or by charging players extra for their games. Using credit cards on such sites can also prove risky. Hence you should read more of the US casino player reviews so that you get some genuine testimonials about casino games and sites. These reviews are also a great aid for earning money through online casino games: they offer tips and suggestions about winning more and avoiding losses. With regular practice and dedication you can avoid early mistakes and thus ensure that you win more. Many other forums and blogs offer such reviews as well.
Booking air travel, making hotel reservations, and arranging vacation travel in general has changed completely with the arrival of the Internet, and many people try to be their own travel agents. While you can seemingly arrange most of your travel yourself, in the long run you cannot do as well as your travel agent. Travel professionals, whether your local travel agent, a tour operator, or a destination specialist, still have contacts that you, as an industry outsider, do not. As in a number of other professions, travel agents, whether in a shopping centre near your home or at an online agency, know things you do not, and have ways to book and arrange travel that are unavailable or unknown to you. Traditionally you could contact a travel agent and ask for a quote, whether the price of an air ticket, a hotel, or a vacation package. For the most part travel agents still provide that kind of information, although there is a limit to how much they can disclose, as not all information is readily available to them. Most travel agents do have at their fingertips routine prices for air ticketing, hotel rates, and certain vacation packages, and will be happy to give you that pricing immediately when asked. But when your travel request needs to be significantly customized, whether tailored to your travel dates or to your other preferences, finding a suitable option becomes time-consuming. Because of the time involved, do not automatically assume an agency is eager to spend it furnishing the information you seek when there is no commitment that you will travel at all. Look at the situation from the following perspective.

In the old days, if you had a problem with your car, you drove it to your local mechanic and asked him to see what was wrong with it. You dropped the car off at the garage, the mechanic had a look, and he told you what the problem was. He would also give you an estimate, and it was up to you to decide whether to have him fix it right then and there, wait, or seek another opinion and another quote. His services cost you nothing. But not anymore. These days, no garage, no auto-repair mechanic is willing to spend time finding out what is wrong with your vehicle without charging you at least one hour of labor up front. Pay, and he will look and tell you. It is up to you whether you take your car to another shop or have him fix it; either way, he has covered the time spent diagnosing the problem. Similarly, many travel agencies, professional travel planners, and tour operators will charge an upfront trip-planning fee if you are requesting arrangements that are time-consuming, or where there is no guarantee you will book anything. When all you are after is essentially a private, tailor-made travel arrangement, there are no easy answers or options to hand you; the only way to find out is for the agent to dig through and consult all sorts of different resources at his disposal and then present the travel alternatives for you to decide upon.
When dealing with a travel agent, travel planner, or any other travel professional such as a knowledgeable destination specialist, keep in mind that a certain protocol will ensure you receive not only the kind of travel arrangements you want in general, but also a true partner who will always work in your best interest, whether you travel away from home on business or for pleasure. First of all, when contacting a travel agent, whether in person or online, don't hesitate to give them your name; don't worry, most agents won't spam you. Without your name, most agents won't take your request for travel advice too seriously. Phone if you prefer, but most agents would rather not take notes; email is the way to go, and since an agent looking up a fare usually has to plug in a name anyway, it may as well be your real name. If you decide not to accept the booking, the reservation will simply expire and no harm is done; if you decide later to purchase it, the agent does not have to rekey everything into the system all over again.

When I first read about the Fast Track to Fat Loss program by Kim Lyons, I thought it would be just another celebrity diet built more on reputation than on real quality. Once I got a membership to try the program out, I knew I was wrong. It was quite clear that a lot of work had gone into making Fast Track to Fat Loss a good program that provides a lot of content which can help people change their body and their life. When you begin using the program, you are teamed up with a personal trainer.
This trainer is your contact person, and he or she is there to support you with every question or issue you have. Having this kind of support, and being able to take any concern to a professional, is powerful. It is also one of the main things missing from other diet plans. Fast Track to Fat Loss offers a large amount of content and value: a variety of workout videos, recipes, nutrition software, goal-setting and motivational tips and ideas, software to keep track of your progress, and much more to help you lose fat, get fit, and improve your nutrition and health. Unlike most diet programs and books, which give you a standard plan you have to follow, with Fast Track to Fat Loss you can build a program suited to your own lifestyle, habits, tastes, and goals. The program provides various online tools to help you design a personalized training and nutrition plan, so there is a better chance you will like it and stick to it for a long time. The main drawback of Fast Track to Fat Loss is that the website does take some getting used to. Because there is so much content, it takes a bit of time to learn how to use the site and find your way around it; you could probably have managed to lose weight with just some of the tools it provides. Overall, this is a solid program which can give you almost everything you need to lose weight. If you are looking for a plan you can make your own, it is a package worth using.

The online toy store "Igroteka" recommends buying a Bestway frame pool and treating children to exciting games in the water.
A frame pool makes an excellent surprise for a birthday or any other special occasion. It will bring children joy and let them enjoy time outdoors under the pleasant sun in gentle water. The manufacturer has designed bright, imaginative models especially for small children and schoolchildren, and all of them are made from safe materials. Igroteka offers frame pools that meet safety standards and carry the appropriate quality certificates. The online catalogue features pools from the best manufacturers, suitable for children of any age. The Intex frame pool has sturdy sides and is made from a non-slip material, so you can be completely at ease about your children's safety. The pool size should be chosen according to the child's age group. Pools are available fully equipped with additional accessories, and prices vary depending on the set. Bestway and Intex guarantee a high level of safety, durability, and a long service life. If you would like to buy a frame pool on the store's website, call the consultants and they will help you choose. The store also offers children's play tents in a variety of sizes, which can be set up in the back yard of a country house or at a dacha. A play tent will delight children of any age, letting them play in all kinds of weather, even during a downpour. A children's tent, which can be ordered from the online store with delivery throughout Ukraine, is also a fun option for birthday parties and games. A playhouse will please boys and girls alike and give them plenty of happy moments with their friends. A children's tent with a tunnel is also useful in the yard of a house or at a kindergarten, wherever parents want to create a welcoming atmosphere for their children.

Depending on its size, even adults can join the games in such a playhouse, which lets children develop, play, imagine, and genuinely enjoy themselves. The store also sells Intex inflatable mattresses, which help children learn to swim, overcome their fear of water, and enjoy playing in a lake, river, or sea. Inflatable mattresses are available for children of different age groups; on an Intex mattress you can ride the waves or relax under the warm sun on the shore. The products come in various configurations and original designs. If you want to buy an inflatable mattress quickly, Igroteka can help. The online store sells inflatable mattresses, inflatable and frame pools, children's tents, and other equipment for children's entertainment. All products are safe, carry quality certificates, and are sold at affordable factory prices. Delivery is handled by shipping companies to any city in Ukraine. For advice, contact the knowledgeable managers, who will answer even the trickiest questions and help you make your choice.
Mustard algae is one of the more common and persistent algae problems: a recurring type of problematic, sanitizer-resistant algae. The keys to controlling it are optimizing water chemistry, improving pool circulation, and eliminating phosphates. When the water chemistry is out of balance, the likelihood of algae growth increases, and impaired sanitation can allow sanitizer-resistant strains to take hold. A ColorQ All-Digital Water Tester can perform all of the common pool water tests and eliminates the color-matching and guesswork; there is a model for every pool-testing need. Reliable water testing will help solve and avoid problems. Better circulation helps make everything work more effectively. The Circulator is a return-jet replacement fitting that improves filtration, eliminates the dead zones that promote algae growth, improves sanitizer distribution, and improves heat dispersion. Phosphates and nitrates can increase the growth of algae and make treatment more difficult, as both are vital plant nutrients. Nitrate removal is not practical, but phosphate removal is easy enough to do: adding Pool Refresh Total Trap will allow you to vacuum and filter out phosphates and should make algae control more effective. These three products all help improve the effectiveness of your sanitizer, reduce costs, and improve water quality. When algae is a frequent problem, it is the result of inadequate sanitation, as well as other factors. Maintaining proper sanitation is a must, and adding some backup sanitizing is important, as chlorine levels rise and fall with pool usage and the chemicals being added. Most pools use some form of chlorine. A Salt Chlorine Generator is definitely a better way to do chlorine: salt chlorine generators are highly automated and give you better control, the salt level is about that found in human tears, and in-line and no-installation-required models are available.

Adding a Solar-Powered Pool Mineralizer introduces copper and zinc ions that provide algae control and additional backup sanitizing. This reduces the amount of chlorine required to maintain an optimum level. A Solar UV Sanitizer creates "free radicals," which help destroy algae and other microbes, and provides some sanitizing backup; it doesn't completely replace chlorine, but it will give better results and reduce chlorine usage. How do you treat yellow mustard algae, a resistant form of swimming pool algae? Mustard algae usually appears as a yellowish-greenish-brownish powdery deposit on the pool walls or bottom. It seems almost "pollen-like" and can easily be brushed off the walls. This troublesome algae will respond to treatment; however, it may take several steps to eliminate it completely. In many cases, a regimen of treatments is required to eliminate and control this sanitizer-resistant algae, and the problem will frequently return if the sanitizer level, water chemistry, and pool circulation are not properly maintained. The addition of a Nano-Stick Pool Clarifier can help the filter remove the dead algae and organic debris more quickly. In addition to proper sanitation, good circulation is a must, to help prevent algae growth in areas with stagnant water or dead zones. The use of The Circulator, as a replacement for standard return-jet fittings, can dramatically improve circulation, better distributing sanitizer to all areas of the pool. Adding a Dual-Ion Solar-Powered Pool Mineralizer will provide some backup algae control, especially important when chlorine or bromine levels bottom out; it can buy you some time until the chlorine or bromine level can be replenished and restored to optimum. Should problems arise, refer to the Pool Problems Page as a source of problem-solving information, broken down into various categories.
Scroll down the page and click on the linked keywords, catch phrases or images in the archived answers below to access additional information on that topic or product.

I am sure that I have mustard algae. It is a yellowish-green color and does brush off the walls easily. I can get rid of it by shocking heavily, but a couple of weeks go by and there it is again. I have heard that a copper algaecide will work, but I have an aggregate-finished pool and would rather not use copper. Any other suggestions?

Your description can be that of mustard algae, and it can be treated with other than copper algaecides. You might have a two-fold problem. One part is that your sanitizer level, chlorine I assume, is probably not being maintained adequately at all times. Make sure that you maintain a 1-3 PPM level of Free Chlorine at all times; do this and it is unlikely that you will see the mustard algae return. If the problem starts in certain areas, redirect the return flow to improve the water flow in that area. Adding a circulation booster, such as "The Pool Circulator," will improve the dispersal of chemicals and dramatically improve circulation, and it is simple to install. For more information, please click here. One of the best products to use for mustard algae is one of those "Yellow" products containing sodium bromide. Used in conjunction with a shock treatment, it will generate bromine, which seems to be especially effective against mustard algae. It is important to test for Free Chlorine when shocking a pool. Make sure that you add enough product, and add it frequently enough, to boost the Free Chlorine to 5-10 PPM; you want at least 1-3 PPM persisting through the overnight period. Do that and there should be a major reduction in the mustard algae by morning. Keep the filter operating continuously until the problem is controlled. You didn't mention if you have a robotic pool cleaner.
They are very effective at cleaning and improving the water circulation on the bottom, and can help remove some of the powdery mustard algae. Improving circulation in the corners will help prevent a recurrence. If this website was helpful in solving your problem, please consider joining our E-Letter Mailing List; you'll receive E-Letters with helpful information, new product updates, suggestions and sale announcements. I hope that I have provided the solution.

► Coping With Mustard Algae?

This is our second season with a 20x40 inground vinyl-liner pool, and we are having extreme difficulty this year with mustard algae. We got mustard algae at the beginning of the season. Our pool professional recommended a particular copper algaecide because it had less copper than the other products he carries. We treated the pool once and realized that the shock we were adding was dissipating very quickly: less than 24 hours later, the water would test with no free chlorine. Our pool professional came by, tested for nitrates, and found that we had nitrates in the pool. This was remedied by renting a submersible pump and pumping the pool down as far as possible (keeping the liner in place) twice; we tested again, and the nitrate problem was gone. We were again told to add the algaecide, which we did, along with 4 lbs of shock. After 24 hours, we were told to vacuum to waste, which we did. Within 24 hours the mustard algae was back, so we repeated the treatment. We were told not to add any more shock, as it might combine with the copper in the algaecide and discolor the liner. Again, I vacuumed to waste. After this third treatment of the mustard algae, I was still seeing the sand-like material on the bottom of the pool.
I thought I had not vacuumed to waste very well, and was told to go ahead and vacuum and add metal treatment (4 quarts) to remove the copper (it was at 0.6 PPM). Before adding the metal treatment, I once again vacuumed to waste. I went back today (24 hours later) and the copper was still at 0.6 PPM. I was still told not to add shock, even though my free chlorine is at 0.9. They gave me two more bottles of metal treatment and told me to have the water tested again in 48 hours. I am also still seeing obvious signs of mustard algae. My question after all this is twofold: 1. Why isn't my algaecide working? 2. What do I do about the copper? My pH tested today at 7.2. Please help.

Let's say that you do have mustard algae. Just because your water tests positive for nitrates doesn't mean that mustard algae problems are inevitable. Testing for nitrates by dealers is not a common practice. Granted, nitrates in the water are not a benefit, but they are not the end of the world, nor necessarily a reason to pump the pool out twice. You may not be able to remove nitrates, but you can remove phosphates, which is the next best thing: like nitrates, they are a vital plant nutrient, and POOL REFRESH makes removal easy. The algaecide that you added is used to control mustard algae. If the product contained less copper, that was offset by requiring you to add more of it; so far as the copper dosage is concerned, there is no benefit if you followed the label. You can add all the metal treatment that you want and the copper reading will not decrease. The copper is in a chelated, or stabilized, state and will remain in the water indefinitely; if anything, the metal treatment has probably diminished the effectiveness of the copper against the mustard algae. It seems apparent, in this instance, that copper has not worked, so let's try something else. Try using a sodium bromide product.
It is sold under several different names - check the ingredient statement. Use this product in conjunction with a shock treatment, and make sure that you keep the Free Chlorine in the 5-10 PPM range for at least an overnight period. It may take more shock than you think to accomplish this goal, and the longer you take to build up the Free Chlorine level, the longer the algae will continue to grow. Keep the filter operating and use the brush on the walls and bottom. A pH closer to 7.2 is a benefit during this period. This regimen should work quickly if you keep the Free Chlorine level elevated. You don't have to do anything about the copper - certainly don't add any more metal treatment! Adding a dose of a Blue Clarifier will help remove the dead algae. After the water clears, backwash or clean the filter and resume normal pool operation. Clarifiers can interfere with some filter media; for that reason, you might consider adding a Nano-Stick Clarifier, which works 24/7 and does not add chemicals or affect filters. It is 21st-century technology. The recurring nature of the problem could be indicative of dead zones and poor circulation. The Pool Circulator is a circulation-booster insert that dramatically eliminates dead zones and makes the water come alive; you'll get better distribution of sanitizers, and that should help minimize algae and other related problems. I hope that I have been helpful.

I service pools and have several that are painted (both rubberized & epoxy). In our summer heat, I notice many more problems with yellow algae in painted pools than in plaster ones. I keep the alkalinity higher in a painted pool, but some chalking does occur with the heat and chemicals. Is there any explanation as to why these pools show yellow algae, which seems to cling to the surface? Also, any suggestion as to a product that will help? Thank you very much.

So far as I know, there is no correlation between yellow-mustard algae and the type of pool or pool finish.
It seems to be an equal opportunity problem, that appears when conditions are favorable. High pH will reduce the effectiveness of chlorine and could be associated with high TA. This problem is one of resistance to normal chlorine levels and even resistance to copper. The treatment that seems to be most effective is the addition of a sodium bromide product and shock treatment. This will convert the chlorine to bromine, which seems to be more effective in certain circumstances. For free chlorine testing, I suggest using one of the ColorQ Water Analyzers, as they reliably provide the right kind of information. To confirm proper overall pool water chemistry, visit a pool store that has a very reliable, professional lab such as a WaterLink SpinTouch Lab, as opposed to a less accurate test kit or strip reader. I hope that this information will prove helpful. ► Questionable Use Of Copper Algaecide? My pool shop has given me a copper algaecide, as I have noticed a small area of mustard algae in one area of my pool. As part of my pool system I operate a mineral purifier. Am I correct in thinking that these are not compatible because the algaecide is copper based and should I use another product? That product is an appropriate and frequent choice for mustard algae. However, it was wrong to recommend it in your case. The particular mineral sanitizer system that you have contributes copper ions to the water, as part of the sanitizing process. Therefore, the mustard algae was already growing in the presence of copper ions and another type of treatment should have been suggested. It is not a compatibility issue. I suggest that you shock the pool and add an initial dose of a polymer algaecide, as this is chemically different and compatible. Another type of mustard algae treatment, based on sodium bromide, cannot be used, as it will shorten the life of your particular mineral sanitizer cell.
Another possibility is that your mineral sanitizer cartridge is exhausted and is no longer contributing copper ions to the water. If that is the case, adding copper algaecide is appropriate and can help jump-start the treatment. Solar-Powered Dual-Ion Mineralizers release copper and zinc and can be used in pools that contain bromine or bromides. BETTER CIRCULATION CAN SOLVE A HOST OF PROBLEMS. The Pool Circulator is the easiest way to improve circulation and eliminate the dead zones, that promote algae growth. I hope that this information proves helpful. Hi Alan, first of all I'd like to say, "great website." It's been helpful. My question is this. I have about 10 or so pools on my route that I have mustard algae problems with every year. I have found that sodium bromide treatment works wonders, but it is so expensive to try to dose 10-15 pools for the whole season. I had an idea to install in-line chlorinators on these problem pools and run chlorine (liquid and tabs in the pool itself) while introducing 1" bromine tabs to the pool through the feeder. Would this achieve the same result? Why or why not would it work? Do you know where a person can purchase liquid sodium bromide from an alternate source? Thank you. Your intentions are two-fold. Obviously you want to continue doing what is needed to properly maintain these pools and, at the same time, there are economic realities. You should never add bromine tablets to anything other than an approved brominator. I believe that your intent is to use bromine tablets to introduce bromine on a continuous basis. Ultimately, how much bromine can be present depends on the bromide content, as well as the chlorine level. Bromine tablets are a costly means of boosting the bromide reservoir. If you want the bromides to convert into bromine, use sodium bromide as the source. It is not a case of not working, just too expensive and is not likely to perform in the same manner as sodium bromide and a shock treatment.
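The economics can be compared on a cost-per-pound basis. A minimal sketch, assuming roughly 4 pounds of sodium bromide per gallon of solution (the rest being water); the prices used are placeholders for illustration, not actual quotes:

```python
# Compare the effective cost per pound of sodium bromide from a liquid
# solution versus a dry product. Assumption: ~4 lb of sodium bromide per
# gallon of solution; prices below are hypothetical examples.
def solution_cost_per_pound(price_per_gallon, lbs_per_gallon=4.0):
    """Effective cost per pound of actual sodium bromide in a solution."""
    return price_per_gallon / lbs_per_gallon

def cheaper_source(solution_price_per_gallon, dry_price_per_pound):
    """Return which form delivers sodium bromide at a lower cost per pound."""
    if solution_cost_per_pound(solution_price_per_gallon) < dry_price_per_pound:
        return "solution"
    return "dry"

# Example: a $20 gallon of solution works out to $5 per pound of sodium
# bromide, so it beats a dry product costing $6 per pound.
choice = cheaper_source(20.0, 6.0)
```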
A gallon of sodium bromide solution has a content of about 4 pounds of sodium bromide, at maximum. The rest is water. Based on your costs, which is more economical? I have no information available on alternative sourcing. I suggest that you maintain a higher chlorine level during the most problematic times, as this is probably necessary, because the bromine formed is more susceptible to the Sun's UV rays. You may be able to get by with a single dose, of sodium bromide, as it does not leave the water, after conversion to bromine. LaMotte offers a Bromide Test Strip, that you can use to monitor the bromide level and know when more sodium bromide should be added. Make sure that the pH is 7.2-7.6, as higher values will decrease effectiveness. Stabilizer level is another factor to consider, as it serves little or no purpose, once bromides have been added. It does not help protect the bromine, from destruction, due to the Sun's UV rays. I hope that I have been helpful. I e-mailed this question to another web page and I have not heard back please help. Hi, my name is Darin. I have a 35,000 gal. vinyl liner, inground pool. I have a 300 lb. sand filter and an automatic chlorine feeder (3" stabilized trichlor tabs). My filter runs 12 hrs. a day. This is the 7th summer I have had my pool. Up until now I have never had any real problems with my pool water chemistry. I have what I believe is Mustard Algae and I can't get rid of it. I maintain approx 3-5 ppm chlorine, 7.4 pH, 180 TA. For 3 years I have been using a 4 in 1 Shock that does contain a stabilizer. As a result my cyanuric acid level is 240 ppm. I have been told by a few stores that almost any level over 40 is o.k. and not to worry about it, and that excessive stabilizer does not cause or promote the growth of algae. Is this true? On approx. Monday August 4th, 2003 we started to notice what looked like it might be sand on the bottom of the pool.
It would lie in any depressions in the liner and on the steps, again it would lie in the depressions. We started to vacuum the pool daily only to find the next day it would look the same. At this point the algae would vacuum up very easily and would cloud up if you waved your hand near it. After several days of this I ruled out sand or any other foreign debris in the pool. On Friday August 8th, 2003 the pool got cloudy, whitish colored, and still produced the same amount of algae every 12 to 16 hrs. I shocked the pool with a 4 in 1 shock according to product label for heavy algae growth, 2 lbs per 10,000 gal. I put 6 lbs in 35,000 gal. This raised my chlorine level well above 10 ppm. On Sunday August 10th I added 32 oz. of a copper based algaecide and had the pool water tested. pH and TA were o.k. Calcium level was low, so I adjusted to a proper level. Copper level was 0.2 ppm. Chlorine above 10 ppm. The algae growth seemed to slow for a couple of days. We continued to brush and vacuum daily. I also cleaned and changed the sand in the filter. On Thursday August 14th the algae still seemed to be growing steadily. I was instructed, after testing my water (chlorine over 10 ppm, stabilizer 240 ppm, everything else o.k.), to shock the pool with calcium based chlorine, circulate 1 hr. and add two 32-oz. bottles of copper algaecide. I shocked the pool and 45 min. later the power went out. We brushed the pool several times that night and the next morning. At 9 am the next morning the power was back on. I checked the chlorine level, above 10 ppm, and added more copper algaecide. The calcium based shock made the pool very cloudy; this took several days to clear up. On Saturday August 16th the algae still seemed to be growing at the same rate. I called the dealer. They told me that because I didn't add the copper algaecide 1 hour after shocking that it wouldn't work and I needed to do it again. I shocked the pool again with a sodium based shock and added two 32-oz. bottles of copper algaecide.
At this point we are still vacuuming every day. And always the next day the algae is back. As this problem has progressed the algae has become more difficult to vacuum up and now grows on the front of the steps and is much more widespread in the pool. I got in the pool with a mask to look at the algae. It still will cloud up and it feels slimy between your fingers. When the algae first starts to appear it looks yellow, like sand on the bottom and like a film on the front of the steps. As the algae gets thicker on the bottom it seems to get a whitish cloud over it. And then after about 24 hrs it looks like sand again, only it looks brown. The pool store tells me that with my copper level at 1.0 ppm and my chlorine level above 10 ppm that algae can't still be alive. That it must be dead and it is just too fine for my sand filter to filter out. They tell me to use a filter aid and vacuum algae to waste. Tuesday August 26th. The pool is quite clear, however the water has a definite green cast to it. And after vacuuming to waste for 3 days and using a filter aid the algae still forms on the front of the steps like a yellow film. And collects on the bottom in all the depressions. It is now suggested that what may be in my pool is metals falling out of solution due to the fact that my chlorine level is above 10 ppm and has been for 3 weeks. They are now suggesting that I neutralize the chlorine down to 5 ppm and add a stain and scale inhibitor to remove the metals. Then after 48 hrs add filter aid to clear up the pool. Please tell me if there could be another cause for this apparent algae growth. I'm not sure it is metals falling out of solution mainly because it does not feel gritty. It feels quite slimy. And it seems to grow in the exact same places and in the same shape every time. It also seems to grow evenly throughout the shallow end of the pool regardless of the amount of circulation in that area.
The only place that it seems not to grow is in the deep end, almost like there could be a thermal layer and the algae doesn't like the colder water. Thank you very much in advance for your time and advice. I truly hope you can provide some information on this issue. Interesting letter. The slimy feeling is positively due to algae or other microorganisms! Sand filters can fail to remove dead algae effectively! Your cyanuric acid is way too high and you need to replace water! The algae is probably resistant to normal levels of chlorine, as high cyanuric acid levels cause the chlorine to act as if the level is much lower than the test shows. The slime could be bacterial and copper probably will not be effective! So where do you begin? First off, I suggest that you replace water in order to lower the cyanuric acid to below 100 PPM. This will help make the chlorine more effective and lower the concentration of copper and other metals. Once the water level is restored and the cyanuric acid level is below 100 PPM, shock the pool water and boost the Free Chlorine level to 10 PPM. At this point the chlorine should be more effective. DO NOT SHOCK WITH ANY PRODUCT CONTAINING A STABILIZED CHLORINE: using such a product will only speed up the rise in stabilizer levels. Use liquid chlorine, lithium or calcium hypochlorites, as shocking agents. Keep the pH close to 7.2 in order to make the chlorine more effective. Going forward, in order to avoid cyanuric acid buildup problems, you should consider switching to a salt chlorine generator. It will provide more control, better results and eliminate the buildup problems. Backwash the filter to waste and add 1/2 pound of DE to the skimmer with the filter running. This will help improve the filter efficiency and make it better able to remove dead algae. Adding a dose of a blue clarifier the day after shocking is a good idea. It coagulates dead algae for easier removal.
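The water-replacement step above is simple dilution arithmetic. A minimal sketch, assuming the refill water contains no cyanuric acid and using the letter writer's 35,000-gallon pool as the example:

```python
# Fraction of the pool water to drain and refill in order to lower the
# cyanuric acid (CYA) level, assuming simple dilution and a refill source
# with zero stabilizer.
def replacement_fraction(current_ppm, target_ppm):
    """Fraction of the pool volume that must be replaced."""
    if target_ppm >= current_ppm:
        return 0.0  # already at or below the target
    return 1.0 - target_ppm / current_ppm

# Lowering CYA from 240 PPM to under 100 PPM, as suggested above:
fraction = replacement_fraction(240, 100)   # about 0.58, i.e. 58% of the water
gallons_to_replace = fraction * 35000       # roughly 20,400 gallons
```

In practice this is usually done in stages, retesting after each partial drain-and-refill, since mixing is never perfect.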
Instead of an ordinary clarifier, you might consider adding a Nano-Stick Clarifier. It is a 21st Century technology, that works 24/7 and can last up to 6 months. Make sure that the Free Chlorine/Bromine remains high, until the problem is solved. Redirect the returns to send more water towards the areas that are most affected. Adding a polymer algaecide, if necessary, might be another worthwhile step. Give the filter a day or so, with the elevated levels, to make a difference. I hope that this all works out for you. You seem to have gone through the wringer. BETTER CIRCULATION CAN SOLVE A HOST OF PROBLEMS. You can instantly get better circulation and chemical distribution, with The Pool Circulator. Simple to install. Let me know how it turns out! How to get better control of yellow/mustard algae problems. The Purifier/Mineralizer uses copper and zinc ions. Treats 32,000 gallons. The Pool Circulator eliminates dead zones, improving sanitizer action. Easy installation. ColorQ Digital Water Analyzers eliminate all the color-matching and guesswork. Easy to use. ► Mustard Pool Algae Woes? I discovered your website last evening and you have an array of information. Thanks for helping to educate us. My situation is as follows: we had an in-ground pool built last February. The pool was installed with a salt chlorine generator and an automatic pool vacuum to make life easier for my husband and me. Since the completion of the pool, we have had a problem maintaining adequate chlorine levels. It comes in spurts. We are aware that after rain, we may have lower chlorine levels, but the inadequate levels are also there when there hasn't been a lot of rain. The other chemicals (pH, calcium, stabilizer, salt, etc.) are being maintained correctly. My husband checks the water weekly and brushes the pool and cleans the filter weekly. My husband has tried the approach of cleaning the generator's cell, but the pool still doesn't maintain adequate levels of chlorine.
We are usually putting in chlorine on a bi-monthly basis. We even had a rep for the generator company come out and he informed us our chlorine generator is producing chlorine. The generator has consistently been on 100% boost. Due to the chlorine problem, we are continually battling a yellow-orange powdery residue on the walls and stairs of our pool. I believe it is mustard algae from lack of chlorine (when the readings are low). I also notice the stairs and bottom of the pool feel slippery. When the generator company rep came out, he informed us we have a high level of phosphates in our pool. We weren't aware we were to check for phosphates. The pool store that checks our water does not check for phosphates either. We later found out the store will check for phosphates if requested. Anyway, the rep told us to use the phosphate treatment program he provided and this should correct our problem with phosphates, chlorine and mustard algae and then our pool should maintain adequate levels of chlorine that are produced by the generator. We treated with the phosphate treatment and after re-testing, we still had a high level of phosphates. We did a second treatment and just re-tested yesterday and the phosphate level is still at 500 ppb. I am losing hope with pool maintenance. We got the salt generator so we wouldn't continually have to add chlorine, but we still have to add chlorine. We treated for phosphates, but it isn't going away. Our pool has mustard algae. The bottom is slimy. Any suggestions? The fact that you have mustard algae and slime on the walls implies that the demand for chlorine is very high. Under these circumstances, it appears that your salt chlorine generator is not able to produce enough chlorine to maintain a proper Free Chlorine level. It is a matter of playing catch up. The phosphates are not helping the situation either: they act as a fertilizer and promote algae growth.
Adding a phosphate eliminator, such as POOL REFRESH, was a good thought. However, 500 PPB may still be too much. To be effective you must lower the level closer to zero. Once you level the playing field and get rid of this backlog of algae and slime, it should be easier for the salt chlorine generator to keep up with the chlorine requirements of the pool. Step one should be to treat again for phosphates. Step two should be to add sufficient chlorine to boost the Free Chlorine level to 5-10 PPM and keep it there long enough to destroy the algae and slime. It may take a lot of chlorine to do this and the longer it drags out the more chlorine will be required. As long as it is not dead, it will continue to grow. Step three should be to add a treatment for mustard algae. You can use either a copper algaecide or a sodium bromide product. Both seem effective. Check with the salt chlorine generator dealer, as to their preferences for a mustard algae treatment. For a salt chlorine generator to function properly, the salt level must be maintained within specified ranges. A Salt PockeTester can be used to test the salt level. It is easy to use and covers the broadest range needed. I hope that this information will prove helpful. Good luck. ► Is It Mustard Pool Algae? Alan, I have been told that I have mustard algae. I first discovered it back in Dec. of last year. The pool was installed in Oct. of the same year. It is a 27 foot round above ground pool. Before I was told that it was mustard algae, I would vacuum it up through the filter. I did this several times before I was told to vacuum to waste. It looks like sand on the bottom of the pool, but acts like a real fine powder when the vacuum gets close to it. If I don't run the pump, then there isn't much on the bottom of the pool. But when I turn the pump on it really shows up. I have treated it with a bottle of Yellow product and some copper algaecides. I also shocked the hell out of it with chlorine.
I have taken everything out of the pool (including the steps) and it still comes back. I still have it and don't know what to do. Help. I have read all your letters concerning mustard algae and pretty much have tried everything you suggested. Help. Thanks. If you really have mustard algae it should respond. Make sure the following is done. Boost the FREE CHLORINE reading to 10 PPM and keep it there until the problem is solved. Make sure that you are testing for FREE CHLORINE! Keep the filter operating continuously, until the problem is solved. Try and direct the water flow into the most affected areas. Add a dose of a Yellow Sodium Bromide product. Use the pool vacuum and brush to clean the corners and pool bottom perimeter. Drop the pH to 7.2. This will help increase the effectiveness of the chlorine. This treatment should be effective, if what you have is mustard algae. Let me know how things turn out. BETTER CIRCULATION CAN HELP SOLVE AND PREVENT THIS TYPE OF PROBLEM. The Pool Circulator is a most effective way to achieve better circulation and chemical distribution. Good luck and I hope that this information will prove helpful. ► What Kind Of Algae Is It? After reading a lot of the problems people are having with mustard algae, I'm not so sure if the algae I have is mustard algae. My algae looks nothing like sand. I called my local dealer and described the algae as a green or light green substance that seems to look puffy and is very easy to vacuum up. The dealer informed me that I have mustard algae and that I should treat it with a mustard algae product. The algae in my pool does not look anything like sand, that's for sure. If I approach it too quickly with the vacuum cleaner head it will explode, only to settle later. How does one determine what kind of algae they are battling? Mustard algae is simply a non-scientific term for variations of the common blue-green algae. It is less important to identify the algae than it is to eliminate the problem.
The fact that it is "powdery" is good enough for me. Boost the FREE CHLORINE level to 10 PPM and keep it elevated, until there is improvement. Add either an initial dose of a chelated copper algaecide or a dose of a 60% polymer algaecide. To avoid a recurrence make sure that you test for FREE CHLORINE and maintain it within the 1-3 PPM range. Use a reliable tester, such as the ColorQ PRO 7, which eliminates all color-matching and guesswork. Try and maintain good circulation, as lack of proper circulation aids the growth of algae. Replacing your existing return jet fittings, with The Circulator, will dramatically improve circulation, by creating a spiraling return flow, that reaches throughout the pool. The addition of a Robotic Pool Cleaner can help greatly in improving bottom circulation and eliminating algae-prone dead-zones. Good luck and I hope that I have been helpful. ► Mustard Algae Not Green Algae? A week or so ago, you helped me identify a pool water problem that I had been fighting for several months. You advised me that I should be killing "mustard algae" (not the green algae that I thought was my problem and so did several local "experts"). Thanks to your expertise and following your instructions, I now have a clear, algae free pool. You were absolutely correct, my problem was mustard algae NOT green algae. You deserve more than just a thank you, but that is about all I can pass along to you. Thank you. Thanks for the follow-up. Glad to hear that everything cleared up. Yellow mustard algae can be a tough one, especially, if you are not familiar with the problem. So don't be too tough on the "locals." Enjoy the summer. Lately, my pool has developed a powdery stuff that is yellowish and looks like pollen. Only thing is I don't have any trees or plants near the pool. Could this be algae? My pool is a 15' x 30' x 4' above ground pool. What should I do? What you are describing could be mustard algae.
The problem can be treated easily enough and with some maintenance shouldn't return. Brush all the walls and the bottom and keep the filter operating continuously, until the problem is eliminated. Add a quick acting shock: liquid chlorine, calcium hypochlorite, sodium dichlor or non-chlorine shock, at the rate of 2 pounds per 5000 gallons of water. After a few hours test for Free Chlorine: make sure that you are using a Free Chlorine Test Kit! Repeat the additions, at the rate of 1 pound per 5000 gallons, until you are able to maintain a Free Chlorine level of at least 1-3 PPM, for an overnight period. At this point all the algae should have been destroyed and normal chlorination can be resumed. If the water is cloudy, add a dose of a "Blue" Clarifier. A Nano-Stick Clarifier can be more effective than a standard clarifier and can last up to six months. Safe with all types of chemicals and filters. Copper is very effective in controlling mustard algae. I suggest adding a dual-ion, solar-powered pool mineralizer, that adds copper and zinc ions. These steps should help keep your above ground pool algae free, but you must maintain a proper chlorine level to keep it that way! I hope that I have been of assistance. Enjoy the summer. ► Mustard Algae Pool Problems? How's it going. I've been in the commercial pool industry for almost four years now and this year has by far been the worst for the mustard algae problem, hitting a lot of companies here in Atlanta. We deal with problems obviously all the time and would like to think we know how to deal with all of them or at least know someone that can. Mustard Algae remains a problem. Yeah, now we are using a new product that specifically treats the Mustard Algae, and proper water chemistry has always been maintained throughout. My question is where is this form of Algae originating and can it be totally wiped out or is it an ongoing battle? It obviously becomes an expense issue over and above set budgets.
But also it's time consuming with the cleanup and generally a pain in the proverbial, if you don't mind me saying. Any insight towards the subject would be appreciated. By using the yellow treatment every time a pool is shocked, can that do any harm even if no algae is present? Thanks for your time, Alan, very interesting website. Mustard algae is a variety of the common green algae and is present in the environment. The problem begins when it becomes resistant to the normal levels of chlorine. There are two popular yellow treatments: one based on sodium bromide and the other based on ammonium sulfate. Both seem to work, although in different ways. The sodium bromide product does have a residual action - the generation of bromine. The ammonium sulfate product has no continuing effect, once the treatment has been completed. No harm can be done to the pool or person, if little or no algae is present. If the pool is vinyl, serious thought should be given to using a chelated copper algaecide. It is not popular in your neck of the woods, but it is very popular across the country. The use of a robotic pool cleaner can help deal with the conditions that can lead to mustard algae problems. Improving circulation and acting as a micro-filter are some important advantages of this type of pool cleaning product. The addition of The Pool Circulator can help improve circulation and chemical distribution and eliminate dead zones. I hope that I have been helpful. ► Mustard Algae And No Chlorine Level? Your site is very helpful, thank you so much. I have a question regarding mustard algae. I treated the pool with 4 lbs. of Yellow Out and 4 lbs. of shock. It did not clear so I added another 4 lbs of chlorine 8 hours later and another 4 lbs. 8 hours after that. The pool is not holding the chlorine and the water is still cloudy green. I haven't vacuumed yet or cleaned the filter. Should I try these two steps or do I have to wait until the pool is clear?
The algae has diminished significantly although traces of it do keep reappearing on the steps. I am brushing throughout the process. Thank you. The "Yellow" ammonium sulfate products work by converting chlorine into chloramines, a form of combined chlorine. It is not what I usually recommend, but it can work. The problem now is that you need to add lots of chlorine - 10 PPM for each 1 PPM of chloramines - in order to destroy the chloramines and decompose the algae. Once you get a stable free chlorine level, the mustard algae should be eliminated. I suggest that you add the liquid chlorine or quick dissolving shock, about 2 pounds, or 2 gallons of liquid, per 5,000 gallons, until the free chlorine level is over 5 PPM. Don't drag it out! The longer it takes, the more product will be required. Keep it there until the problem is under control. Check the overall water chemistry as well. Have the water tested for phosphates and nitrates, as their presence could promote algae growth and increase chlorine usage. If phosphates are present, you can remove the phosphates, by treating the water with Pool Refresh, which is a 2-part system, that allows you to filter or vacuum the phosphates out of the pool water. Make sure that you are testing for FREE CHLORINE. A product, such as the ColorQ all-digital water analyzer, provides the right kind of information and is ideal for this purpose. Adding a periodic dose of a copper algaecide might help prevent a recurrence. Otherwise, if it returns try adding a 60% polymer algaecide. Poor circulation can make algae growth more likely. You might consider adding THE POOL CIRCULATOR. The easy to install device will eliminate the dead spots, that can promote algae growth, by creating a spiraling return flow, that reaches throughout the pool. Better circulation cures a lot of problems. I hope that I have been helpful. I have a 27' round above-ground pool. I CANNOT seem to get rid of the mustard algae problem I have.
I had my water tested, and the metals were extremely high (due to well water being combined with city water when it was originally filled 2 years ago). To bring these levels down to normal, I had to add a total of 5 bottles of metal treatment over a one week period. The pool company suggested taking care of this problem first. I have had this algae problem since the pool was first set up, but have always had good chlorine readings. Now, I have NO chlorine. I have used a sodium bromide, with up to 5 lbs of shock (on 3 occasions), and still have the algae, and no chlorine. I have a brand new cartridge filter. I brush and vacuum all the time. All this time my water has always been crystal clear. I use a chlorine floater with 3" slow tabs, AND add the one a day fast dissolve tabs daily. Is there another way to get my chlorine back? Did the metal treatment take it away? I am in need of serious help with this continuing problem. Thank you. Adding the metal treatment was the right thing to do. Controlling the metals should come first. The metal treatment did not interfere with your chlorine reading. I would not suggest that you use a copper algaecide to treat the mustard algae. You have enough of a metals problem and the metal treatment could interfere with the copper algaecide. If the source water contains iron and other metals, you can avoid adding to the problem, when new water is added. Simply attach a METALTRAP Filter to the garden hose and it will remove metals, as water is being added to the pool. The sodium bromide product that you added should help control and eliminate the mustard algae, BUT, only if you maintain a suitably high level of chlorine. Not having any chlorine is indicative of the fact that the chlorine being added is converting to bromine, by oxidizing the sodium bromide. In turn, the bromine gets destroyed, by the Sun's UV rays, so the level always appears low.
I suggest that you add some liquid chlorine, after the Sun goes down, as that will allow the bromine generated to last through the night and into the early part of the next day. You might also consider using a 60% polymer algaecide, as it is not copper-based and will provide some backup sanitation, throughout the day. I hope that this information will prove helpful. Good luck and enjoy the pool season. I have a question. We had a small problem with mustard algae. We went to our pool supply and brought a water sample and they sent us home with some copper algaecide. Now our less than 1 year old pool has a blue something all over the bottom and stairs. We went back and they gave us a mineral remover. I don't know what to do now. Help. The "blue" something could be copper. If the copper algaecide was a chelated copper formulation, it would be unusual to cause staining, unless the pool water chemistry was far from optimum. The product that you added is used to help control heavy metal staining. I doubt that it will remove the stains just by the simple addition of the product to the pool water. It will probably be necessary for you to drop the pH of the water to approximately 7.0 and add MetalTrap Stain Remover. After the stains are removed, follow with the addition of Pool Refresh, which will allow you to filter or vacuum the stain-causing metals out of the pool. Lastly, add a dose of Liquid MetalTrap, to scavenge up any lingering traces. All 3 MetalTrap products are contained in the MetalTrap Stain Reversal Kit, which should be used, as directed. There are other means of controlling mustard algae that do not involve a copper algaecide. Judging from your experience, you might want to try a 60% polymer algaecide, which contains no metals. Better circulation will surely help and you can easily and dramatically boost circulation.
By installing The Circulator, in place of the standard return jet fittings, you create a spiraling return flow, that reaches throughout the pool. I hope that this information will prove helpful. I have a recurring problem with mustard algae. I have followed some of the recommendations that you have provided under the yellow mustard algae topic. The problem does seem to be under control. My question is, do you think that an automatic pool vacuum will make a difference? The pool is a 16' x 32' inground pool. Thanks for the help. It certainly won't hurt. A robotic pool cleaner will help improve water circulation across the bottom and all of the nooks and crannies. And that's where algae tends to gain a foothold. In addition, mustard algae tends to be powdery and the pool cleaner should help remove it from the underwater surfaces. Did I mention that it will save time and effort? Good luck with your decision. I think that I have a greenish mustard algae. It can be vacuumed and brushed quite easily. Shocking the pool does seem to help. The problem is that it comes back again and again. My pool is an 18 x 36 foot vinyl lined pool. What products are best to use so that I can avoid this problem? Thanks. Mustard algae can be treated in two effective ways and, in your vinyl pool, both are good. Chelated Copper Algaecides are usually effective, in controlling this type of algae. The chelated types of copper algaecide will require additions every week or two and this will certainly help, in your case. Your sanitizer level, chlorine I assume, is probably not being maintained adequately at all times. Make sure that you maintain a 1-3 PPM level of Free Chlorine, at all times. Do this and it is unlikely that you will see the mustard algae problem returning, with any regularity. If you don't have an automatic pool cleaner, consider adding one. These cleaners are quite affordable and are very effective at cleaning and improving the water circulation on the bottom.
In the case of your above ground pool, it can act as a main drain while operating. Another effective treatment for mustard algae is the use of a 60% polymer algaecide. When shocking a pool, make sure that you add enough product, and add it frequently enough, to boost the Free Chlorine level to 5-10 PPM. Make sure that the pH is 7.2-7.6. Try to maintain at least 1-3 PPM through the overnight period. Keep the filter operating continuously until the problem is controlled. Once the problem is controlled, resume normal chlorination and filtration. Poor circulation creates dead zones that promote algae growth. Better circulation assures better distribution of the sanitizing chemicals and makes algae problems less likely. The Pool Circulator is an easy-to-install device that will dramatically improve circulation and eliminate any dead spots. Going forward, I suggest switching to a salt chlorine generator. It provides better results and more control, while eliminating the negative effects of chlorine use. I hope that this information will prove helpful. In treating mustard algae, is it also recommended to treat pool toys, floats, the vacuum, even bathing suits? I've been given many different opinions and don't want the mustard algae to return. Treating the pool accessories certainly can't hurt. But, by itself, it will not prevent a return of the problem. The pool water and conditions must be maintained so that they are unfavorable to mustard algae growth. Pay attention to the Free Chlorine levels and the water filtration and circulation. Stagnant water will cause problems. Redirect the return flow into any area that seems to be prone to the problem. I hope that I have been helpful. ► Yellow Algae Stain or Metals Stain? Alan, I have an inground pool with a volume of 15000 gallons. I have treated what I believe to be a yellow algae problem twice, following the recommended steps necessary for the yellow treatment product to work.
The algae still exists on the side of my pool and does not scrub off, even with a brush. Is there something else that I can use to get rid of this problem? You didn't say if the product was helpful. Yellow mustard algae brushes off very easily. Either it is another type of algae or it is a mineral stain, possibly iron. Try this: put 1/2 pound of pH reducer powder in a white sock, tie it on a rope and hang it over the side of the pool against the stained area. Check after fifteen minutes. If there is improvement, it is definitely a metal stain. To treat the stain, I suggest that you use a MetalTrap Stain Reversal Kit, which contains everything required to dissolve the stains, eliminate the metals from the water and help prevent a recurrence. If the sock treatment did not work, I suggest that you try using a 60% polymer algaecide. Boost the Free Chlorine reading to 10 PPM and use the scrub brush. Redirect the returns to send more water towards the affected areas. Better circulation can help solve and prevent this type of problem. The Pool Circulator is a most effective way to achieve better circulation and chemical distribution. I hope that this will prove helpful. ► Mustard Algae And Biguanide? Alan, I have a 24 foot above ground pool that is 3 years old, and I am constantly fighting what I am told is mustard algae. It appears as a yellowish color that almost looks like sand lying on the bottom, and I also get it on the sides and behind the ladder. I use biguanide instead of chlorine because of my wife's skin allergies, so switching to chlorine is not really an option, but I can't seem to get rid of this problem. Do you have any suggestions? I have had the water tested, the levels all look good, and I use the premium algaecide. Thanks. You may have a problem! I know that is not what you wanted to hear, but the best products for use against mustard algae cannot be used in a biguanide pool.
So unless your dealer comes up with a proven recommendation, I think that you will have to switch to chlorine, at least temporarily. In fact, you should re-evaluate the situation and decide if you really want to stay with biguanide. Based on the emails I receive, you will have to switch sooner or later, and now seems like the right time. Before trying chlorine, you might try adding a polymer algaecide. This material will register on the test kit as biguanide. Make sure that the biguanide is raised to 50 PPM before adding. Retest after adding the algaecide. Any increase is due to the algaecide and should be taken into account when retesting over the next month or so. If this fails, you should consider switching away from biguanide. Using a salt chlorine generator would be the best way to utilize chlorine. It will provide better results and more treatment options. The switch will become inevitable, so I would not continue to throw money at the problem by trying to stay with biguanide. Good luck.
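The shock-dosing advice above (boosting Free Chlorine to 5-10 PPM) comes down to simple arithmetic on the pool volume and the strength of the liquid chlorine. The snippet below is a rough back-of-the-envelope sketch, not a product recommendation: the 12.5% sodium hypochlorite strength is an assumed example (check your product label), and real-world chlorine demand from algae will consume part of the dose, so always retest and re-dose as needed.

```python
def liquid_chlorine_gallons(pool_gallons, ppm_increase, strength_pct=12.5):
    """Estimate gallons of liquid chlorine (sodium hypochlorite) needed to
    raise Free Chlorine by `ppm_increase` PPM, ignoring chlorine demand.

    PPM is parts per million; liquid chlorine and pool water have roughly
    the same density, so a simple volume ratio is close enough here.
    """
    return ppm_increase * pool_gallons / (strength_pct / 100 * 1_000_000)

# Example: shock a 15,000-gallon pool from 2 PPM up to 10 PPM (an 8 PPM boost)
dose = liquid_chlorine_gallons(15_000, 8)
print(round(dose, 2))  # → 0.96 gallons
```

As a sanity check, one gallon of 12.5% liquid chlorine raises 10,000 gallons of water by about 12.5 PPM, which is the ratio the function encodes.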
2019-04-23T18:03:10Z
http://zimmyszoo.com/book/free-An-Architectonic-for-Science%3A-The-Structuralist-Program/
Prairie Creek is an unmined high-grade Zn-Pb-Ag deposit in the southern Mackenzie Mountains of the Northwest Territories, located in a 320 km2 enclave surrounded by the Nahanni National Park Reserve. The upper portion of the quartz-carbonate-sulphide vein mineralization has undergone extensive oxidation, forming high-grade zones rich in smithsonite (ZnCO3) and cerussite (PbCO3). This weathered zone represents a significant resource and a potential component of mine waste material. This study focuses on characterizing the geochemical and mineralogical controls on metal(loid) mobility under mine waste conditions, with particular attention to the metal carbonates as a potential source of trace elements to the environment. Analyses were conducted using a combination of microanalytical techniques (electron microprobe, scanning electron microscopy with automated mineralogy, laser ablation inductively coupled plasma mass spectrometry, and synchrotron-based element mapping, micro-X-ray diffraction and micro-X-ray absorption spectroscopy). The elements of interest included Zn, Pb, Ag, As, Cd, Cu, Hg, Sb and Se. Results include the identification of minor phases previously unknown at Prairie Creek, including cinnabar (HgS), acanthite (Ag2S), metal arsenates, and Pb-Sb-oxide. Anglesite (PbSO4) may also be present in greater proportions than recognized by previous work, composing up to 39 weight percent of some samples. Smithsonite is the major host for Zn, but this mineral also contains elevated concentrations of Pb, Cd and Cu, while cerussite hosts Zn, Cu and Cd, with concentrations ranging from 6 ppm to upwards of 5.3 weight percent in the two minerals. Variable concentrations of As, Sb, Hg, Ag and Se are also present in smithsonite and cerussite (listed in approximately decreasing order, with concentrations ranging from <0.02 to 17 000 ppm). A significant proportion of the trace metal(loid)s may be hosted by other secondary minerals associated with mineralization. 
Processing will remove significant mineral hosts for these elements from the final tailings, although some may remain depending on whether the smithsonite fraction is left as tailings. Significant Hg and Ag could remain in tailings from cinnabar and acanthite trapped within smithsonite grains; these inclusions were found to host up to 53% of the Hg and 79% of the Ag contained in some samples. In a mine waste setting, near-neutral pH will encourage retention of trace metal(loid)s in solids. Regardless, oxidation, dissolution and mobilization are expected to continue in the long term, and may be slowed by saturated conditions or accelerated by localized flow paths and acidification of isolated, sulphide-rich pore spaces. The Prairie Creek Zn-Pb-Ag deposit is located 500 km west of Yellowknife in the southern Mackenzie Mountains and is completely surrounded by the recently-expanded Nahanni National Park Reserve (Fig. 1). The district contains several types of carbonate-hosted mineralization, including stratabound replacement sulphides, quartz-carbonate-sulphide veins, and classic Mississippi Valley-type (Paradis 2007, 2015). The quartz-carbonate-sulphide veins have undergone extensive oxidation and alteration, forming zones rich in smithsonite and cerussite, collectively referred to as the ‘oxide zone’. There is a wide range of trace elements associated with the oxide zone, including Ag, Cu, As, Sb, Cd, Se and Hg, some of which are of potential economic or environmental significance. Water and silt sediments from several streams draining the quartz-carbonate-sulphide vein and Mississippi Valley-type occurrences in the Prairie Creek district were collected and analysed as part of a companion study (Skeries et al., in press). The geochemical signal from these occurrences tends to be muted and does not persist much farther than 5 km downstream (McCurdy et al. 2007). 
Although all the trace elements listed above could be detected in many samples, Pb and especially Zn were typically orders of magnitude higher in concentration, and only these two are potential candidates as pathfinder elements (Skeries et al., in press). Mineralogical analysis of sediments proximal to Zn-Pb mineralization indicates the presence of detrital galena, sphalerite, smithsonite and cerussite, and confirms active chemical weathering, including the dissolution of primary Zn and Pb minerals and the precipitation of goethite and hematite that sequester Zn and Pb. The focus of this paper is on the deposit itself, and the objective is to understand how Pb, Zn, Ag, Cu, As, Sb, Cd, Se and Hg are mineralogically hosted in the oxide zone of the Prairie Creek deposit. Although the dominant minerals in this zone are carbonates rather than oxides, we have retained the term ‘oxide zone’ typically used for such supergene deposits (Hitzman et al. 2003; Boni & Mondillo 2015). The results of our research have implications for assessing the economic value of the oxide zone, since understanding how potentially valuable trace elements such as Ag and Cu are hosted mineralogically will improve resource assessment, mineral processing and mine planning. Additionally, the results will help predict the geochemical controls on metal(loid) concentrations in drainage from future mine waste. Although widespread acid rock drainage is unlikely in this carbonate-rich environment, and Skeries et al. (in press) indicated limited mobility of trace elements on a regional scale, crushing metal-bearing waste rock and tailings can increase mineral reactivity and metal leaching even in pH-neutral drainage (e.g. Nordstrom 2011 and references therein). Moreover, there is limited information on how metals hosted in smithsonite and cerussite will behave in the mine waste environment, particularly if placed in water-saturated conditions. 
The Prairie Creek mine site was developed in the 1980s but never reached production. The current owner, Canadian Zinc Corporation, intends to bring the property into production (Canadian Zinc Corporation 2010, 2014). Although acid rock drainage is not anticipated due to the substantial amount of carbonate associated with the mineralization, metal(loid) leaching occurs from the portal and is predicted from geochemical tests (MESH Environmental Inc. 2008). When mining commences, the tailings will be ground to a fine grain size (c. 80% at <80 µm), increasing their potential reactivity. Most of the Pb and Zn sulphides will be removed, and the Pb and Zn carbonates (cerussite and smithsonite) may be removed as well. Post-production plans include underground storage of all tailings as paste backfill. This may or may not include the smithsonite and cerussite fractions, depending on the final mine plan. Results from this research are used to predict the leaching behaviour of paste backfill made with the oxide zone tailings. The Prairie Creek deposit is situated in an ancient paleo-basin composed of Lower Palaeozoic deep-water basinal rocks and platformal carbonates of the Mackenzie Shelf, consisting of limestones, dolostones, siltstones, shales and mudstones (Morrow & Cook 1987). The Prairie Creek stratigraphic sequence, from oldest to youngest, consists of the Sunblood Formation sandstone, Whittaker Formation dolostones, Road River Formation shales, and Cadillac Formation thinly bedded limy shales. In the northern part of the property, Arnica and Funeral Formation dolostones and limestones overlie this assemblage. Figure 2 illustrates the location of geologic units and structures relative to the Prairie Creek mine site and other Zn-Pb-Ag mineral occurrences. 
Beginning in the Jurassic, the region surrounding the Prairie Creek deposit underwent three phases of deformation, resulting in doubly-plunging, faulted anticlines, broad, flat-bottomed synclines, steeply dipping reverse faults, and flatter thrust faults (Morrow & Cook 1987; Falck 2007). Fractures paralleling the north-trending reverse faults (e.g. the Prairie Creek Fault) host vein-style mineralization, forming a corridor that extends for 16 km. The quartz-carbonate-sulphide veins occur predominantly within the argillaceous bioclastic shaly dolostone and overlying dolostones of the Upper Whittaker Formation and shales of the Road River Formation. The vein outcrops discontinuously along this north–south-trending 16 km long corridor close to the axial plane of the doubly-plunging anticline (Fig. 2). Where the vein system has been most extensively explored, the Main Quartz Vein (MQV) is associated with a north-striking, steeply east-dipping, near-vertical fault (Canadian Zinc Corporation 2010). The MQV averages 2 – 3 m in width and has a strike length of at least 2.1 km. Drill-hole intercepts have demonstrated that the vein extends to at least 600 m below surface (Canadian Zinc Corporation 2010). The vein system is characterized by base metal mineralization, with the minerals of interest consisting of galena, sphalerite, pyrite, and tennantite-tetrahedrite occurring as massive to disseminated sulphides in a quartz-carbonate-dolomite gangue matrix. Weathering and fluid flow along the fault have oxidized the upper portion of the vein, altering c. 15 – 20% of the total lead sulphides and 10% of the zinc sulphides to the Pb and Zn carbonate minerals cerussite (PbCO3) and smithsonite (ZnCO3) (pHase Geochemistry 2010). Although stratabound replacement and Mississippi Valley-type mineralization are also present on the property, they are not a significant component of the oxide zone resource of the Prairie Creek deposit. 
Consequently, they were not a focus of research and are not discussed in detail in this paper. Fieldwork in August 2013 resulted in the collection of 29 samples of surface exposures of vein mineralization, 19 samples of the main quartz-carbonate-sulphide vein from the 930 level of the underground workings, 83 samples from drill core (6 of which represented stratabound replacement sulphide mineralization), and 3 samples from the ore stockpile representing vein material from the 870 underground level. Samples were chosen based on the presence of oxide zone mineralization and, where available, elevated concentrations of the elements of interest (Zn, Pb, Ag, Cu, As, Sb, Cd, Se, Hg). Seven additional samples of surface exposures were supplied by the Northwest Territories Geological Survey, and 10 thin sections from the Geological Survey of Canada collection were analysed. Sample locations are shown in Figure 2 and included in the Supplementary Material. A subset of 53 samples, chosen to represent various degrees and styles of oxidation within the quartz-carbonate-sulphide veins, was digested via aqua regia and analysed for 45 elements by inductively coupled plasma optical emission spectrometry (ICP-OES) at AGAT Laboratories in Vancouver. A non-sulphide leach using ammonium acetate (for Pb) and ammonium chloride/ammonium acetate (for Zn) was used to obtain Pb and Zn concentrations considered to represent cerussite and smithsonite, although this was not explicitly tested. Samples with Ag concentrations greater than 500 ppm were subjected to fire assay fusion with a gravimetric finish. Sample replicates, quartz blanks and internal reference materials were used for quality control and quality assurance. Replicate data indicate acceptable reproducibility (typically within 15%) for the 45 elements analysed. Complete results, including results on blanks, replicates and reference materials, are available in Stavinga (2014) and in the Supplementary Material. 
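The replicate-based quality control described above reduces to a relative-percent-difference check against the stated 15% criterion. A minimal sketch; the Zn values below are hypothetical and serve only to illustrate the calculation:

```python
def relative_percent_difference(a: float, b: float) -> float:
    """RPD between an original analysis and its replicate, in percent."""
    return abs(a - b) / ((a + b) / 2.0) * 100.0

# Hypothetical Zn results (ppm) for a sample and its lab replicate.
original, replicate = 158_000.0, 149_000.0

rpd = relative_percent_difference(original, replicate)
acceptable = rpd <= 15.0  # reproducibility criterion quoted in the text
```

A replicate pair failing this check would flag the batch for re-analysis or for exclusion of that element from interpretation.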
Polished thin sections were made from 38 samples selected on the basis of elevated concentrations of elements of interest, the presence of Pb or Zn carbonates, and location within the deposit. These were examined by petrographic microscopy, and analysed by scanning electron microscopy (SEM) using a FEI MLA Quanta 650 with a field emission gun. SEM imaging was used for locating targets, including inclusions in smithsonite and cerussite, for electron microprobe and synchrotron microanalysis. The concentrations of major and minor elements were determined by electron microprobe (EMP) analysis at the Queen's Facility for Isotope Research (QFIR), Queen's University, using wavelength dispersive spectrometry (WDS) on a JEOL JXA-8230 instrument. A beam current of 20 nA for sulphides and sulfosalts, and 10 – 20 nA for carbonates, was used with a peak and background count time of 10 – 60 s. A focused beam of less than 1 µm diameter with an accelerating voltage of 20 kV was used for sulphides and sulfosalts, while a 10 µm beam diameter with a 15 kV accelerating voltage was used for carbonates. Natural and synthetic mineral phases and pure elements were used as internal standards for instrument calibration. Major element precision in analyses of unknowns was generally better than 4 wt%, with increased precision for minor (<1 wt%) elements. Lower detection limits for minor elements ranged from 113 to 1349 ppm. Trace element concentrations were determined by laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) at QFIR using an XSeries 2® ICP-MS coupled to a New Wave/ESI Excimer 193-nm laser ablation system. Daily setup of the LA-ICP-MS was conducted on a USGS glass standard (GSD) to optimize He and Ar flow through the ablation cell and the plasma torch to yield >200 000 cpm on 238U, to maximize sensitivity and minimize the production of oxides (238U16O/238U < 1%). 
In order to minimize beam attenuation and obtain a more even ablation profile, a series of trenches was ablated through mineral targets using a beam diameter of 50 μm at 5 Hz (5 μm/s), with a gas blank of 15 – 50 s. Analyses were bracketed by calibrations using the USGS glass standards (GSC-1G, GSD-1G and GSE-1G) and an external standard (BHVO-2G) (Jochum et al. 2005) to monitor instrument drift and correct for elemental bias and laser yield. Raw data were plotted against element-specific calibration curves created using GSC-1G (c. 2 – 10 ppm for most trace elements), GSD-1G (c. 30 – 70 ppm) and GSE-1G (c. 250 – 600 ppm) to quantify the ablated areas. Synchrotron-based trace element mapping using µXRF and grain-scale micro-X-ray diffraction (µXRD) were done at beamline X26A at the National Synchrotron Light Source. µXRF mapping was performed at a beam energy of 11 500 – 13 500 eV, with a beam spot size of c. 6 by 9 µm, a step (pixel) size of 3 – 25 μm, and a dwell time of 0.1 s/pixel. µXRD analyses were done at a beam energy of 17 479 eV, using silver behenate and Al2O3 standards for μXRD calibration (Walker et al. 2005, 2011). The peak-matching software X'Pert HighScore (PANalytical) and a recent mineral library were used to fit μXRD patterns to mineralogical phases. MicroXANES (X-ray absorption near-edge spectroscopy) of Sb-rich target spots was performed at Sector 20 at the Advanced Photon Source to determine the Sb oxidation state by comparison with the standard materials Sb2O3, Sb2O4 and Sb2O5 using Athena© software (Ravel & Newville 2005). SEM-based quantitative mineralogy using Mineral Liberation Analysis (MLA) was applied to eight thin sections representative of the oxidized mineralization. This method can allow for the identification of nearly all minerals in a thin section, based on a user-defined mineral reference library which, in this case, was developed from petrographic and SEM observations and µXRD. 
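The bracketing calibration described above amounts to fitting an element-specific working curve through the three glass standards and reading unknowns off it. A minimal sketch of that step; the count rates are hypothetical, and only the standard concentrations follow the approximate ranges quoted in the text (c. 2 – 10, 30 – 70 and 250 – 600 ppm):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for a working curve."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical background-corrected count rates for one analyte in the
# three standards (GSC-1G, GSD-1G, GSE-1G), with nominal concentrations.
counts = [1.0e4, 1.5e5, 1.2e6]
ppm = [5.0, 50.0, 400.0]

m, b = linear_fit(counts, ppm)
unknown_ppm = m * 3.0e5 + b  # quantify a hypothetical unknown signal
```

In practice the curve is re-fitted for each bracketed block of analyses so that instrument drift between calibrations is corrected.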
The relative proportions of the minerals and the distribution of particular elements amongst minerals can be calculated without bias. This was accomplished by combining the estimated average concentration of an element in a mineral with the relative mineral proportion in thin section as determined by MLA. Further details on the analytical methods can be found in Stavinga (2014). Overall, in the context of this study, analysis by EMP, with pre-selection of targets using SEM, proved most useful for determining the elemental composition of specific minerals. The small beam diameter of the EMP also allowed for precise targeting of fine textures and avoidance of inclusions. Some phases, primarily the arsenates, proved too sensitive to analyse by EMP, since they were quickly damaged and destroyed by prolonged exposure under the beam. Detection limits for EMP are typically still in the hundreds of parts per million, the lowest achieved being 113 ppm, limiting analysis of trace element concentrations. Analysing for a large suite of elements is also impractical, as beam time on selected targets increases with each element selected for analysis, increasing damage to the site. Microanalysis using LA-ICP-MS proved essential in quantifying trace element concentrations, which were subsequently used to estimate the average concentration of elements within a particular mineral and the distribution of elements. This method allowed for lower detection limits, approaching the ppb level (with upper and lower bounds ranging from 5000 ppm to 20 ppb), as well as the ability to analyse for a larger list of elements compared to EMP. The laser ablation line also allowed for the detection of changes in element composition across a mineral crystal or structure. A larger beam diameter allowed a greater area to be analysed for more representative results. 
However, the larger sample volume analysed made it difficult to avoid inclusions of separate phases within the mineral of interest and precluded analysis of finer targets, while the greater penetration depth means that any shallow, underlying phases may also be ablated and influence results. The comparative merits of EMP and LA-ICP-MS observed during this study are generally well understood and were consistent with those observed by others (Gauert et al. 2016). MLA proved to be the best tool for estimating relative proportions of minerals and specific element speciation, a term which in this case describes the solid phase that hosts the element (Ure 1991; Templeton et al. 2000). Synchrotron-based microanalysis was critical in identifying unknown phases via µXRD, and provided important insight into the distribution of elements of concern and their valence states that would not otherwise have been known (i.e. the Hg and Se associations, and the Sb oxidation state). Each analytical technique complemented the others, and their combined use greatly increased the quality of the results achieved. A comprehensive understanding of how elements are mineralogically hosted would not have been achieved with any single method. In general, most samples of the oxidized MQV show higher average concentrations of Ag (422 ppm), As (1587 ppm), Cu (8786 ppm), Cd (1374 ppm), Hg (491 ppm), Pb (16.4%), and Sb (3563 ppm) than the adjacent carbonate host rock (Fig. 3). Except for two samples, Se was mostly below the detection limit (<50 ppm). Total Zn concentrations are relatively consistent between the vein (15.8%) and the immediately adjacent host rock of the hanging wall (15.2% for a sample within 10 cm of the vein), but are lower in the footwall. Non-sulphide Zn concentrations follow a similar pattern, with average concentrations of 9.9% and 8.2% in the vein and hanging wall rocks, respectively. 
Iron concentrations were similar between the carbonate footwall (0.4%), hanging wall (0.4%), and vein material (0.6%). Although Cd and Hg have higher average concentrations in the vein than in the host rocks, their values generally fall within the range of concentrations occurring within the hanging wall rocks, with the exception of a few outliers. Digestion by non-sulphide leach showed that non-sulphide Pb concentrations are generally higher in the mineralized vein (average 8.1%), slightly lower in the hanging wall rocks (average 3.6%), and much lower in the footwall rocks (average 0.3%). The complete analytical results and correlation matrices for the lithogeochemical analyses can be found in Stavinga (2014). Table 1 lists all minerals identified through a combination of analytical techniques, including SEM, EMP, XRD and synchrotron-based μXRF and μXRD. The most common and abundant minerals identified consist of the primary sulphides (galena, sphalerite, pyrite) and sulfosalts (tetrahedrite-tennantite), the host rock and gangue minerals (calcite, dolomite, quartz), and the secondary metal carbonates (smithsonite, cerussite, malachite, azurite). Additional secondary oxidation products identified include anglesite, arsenates, covellite, bindheimite, acanthite, cinnabar, and goethite. Tentatively identified phases (such as tenorite and olivenite), 26 in all, were distinguished by SEM but did not diffract well enough to be firmly identified by μXRD. Analyses of the metal carbonates and anglesite by EMP and LA-ICP-MS indicate that they host many of the elements of concern, including Ag, Cu, As, Hg, Sb, Cd, and Se. The elemental concentrations presented in Figure 4 suggest that some elements are preferentially hosted by certain minerals over others; for instance, Ag and Se are higher in cerussite and anglesite, while Cd is more concentrated in smithsonite. Anglesite also contains higher concentrations of Ag, As, Cu, Sb and Se than the Pb and Zn carbonates. 
Based on EMP and LA-ICP-MS analyses, smithsonite, in addition to being a major host for Zn, is also a host for Cu (146 – 33 000 ppm), Pb (384 – 22 000 ppm), Cd (301 – 14 000 ppm), Sb (28 – 17 000 ppm), Hg (593 – 5679 ppm), As (10 – 2650 ppm), Ag (1 – 1021 ppm) and Se (<0.02 – 29 ppm), in approximately decreasing order. Cerussite, the major secondary host for Pb, also contains Zn (96 – 53 000 ppm), Cu (8 – 24 000 ppm), Sb (1 to >5000 ppm), Ag (0.2 to >2000 ppm), Hg (<675 – 8139 ppm), Cd (6 to >1600 ppm), As (0.3 – 3558 ppm) and Se (0.02 – 30 ppm), in order of decreasing concentration, whereas anglesite is host to Pb, Cu, Zn, Sb, Hg, Ag, Cd, As and Se (see Table 2 for concentrations). Malachite and azurite were also found to attenuate Cu, Zn, Pb, As, and Sb, in decreasing order. Qualitative SEM and µXRF analyses further indicated malachite/azurite as possible hosts for Cd, Ag and Hg. The geochemistry discussed above agrees well with the trace elements commonly observed in smithsonite and cerussite in other deposits (Boni & Large 2003; Katerinopoulos et al. 2005; Balassone et al. 2008; Garcia-Guinea et al. 2009; Lin et al. 2010). Previous work (MESH Environmental Inc. 2008; pHase Geochemistry 2010) identifies dissolution of the metal carbonates as the primary source of mobilized metal(loid)s from simulated tailings material. However, our results indicate that some of the metal(loid)s are actually hosted in minerals other than smithsonite and cerussite. A combination of element mapping and µXRD was used to identify additional secondary minerals where diffraction patterns were clear. In many cases, elements of interest are hosted by secondary minerals present as small inclusions within smithsonite and cerussite, as illustrated in Figures 5 and 6. Mercury is hosted by cinnabar (Figs 5a and 6a), and Ag is hosted by acanthite (Fig. 6b). Arsenic is hosted by several arsenate minerals (see Table 1; Fig. 
6b, c and d), which also attenuate concentrations of Pb, Cu and Zn, in decreasing order of significance (assuming equal proportions of each positively identified phase). Most of the Sb is hosted in bindheimite, a Sb(V)-bearing member of the stibiconite group of the pyrochlore supergroup (Fig. 5b). MicroXANES analysis of Sb-rich hotspots identified as bindheimite indicated a mixture of Sb(III) and Sb(V) (Fig. 7), suggesting substitution of Pb by Sb(III). Trace element mapping employing synchrotron-based μXRF revealed the arsenates to be a potentially significant host of Hg and Se as well. SEM and μXRF also found bindheimite to host Cd, Zn, As, Cu and possibly Ag and Hg. Cinnabar was found to contain Se, whereas acanthite hosts Hg and Zn, and covellite was found to hold Cu, Pb, Ag and Zn. Goethite, which replaces pyrite and is also a common inclusion in smithsonite, contains detectable concentrations of metal(loid)s under qualitative SEM and µXRF analysis, primarily Pb, Sb, Zn, As, and Cu (Fig. 6e), and possibly Hg and Se as well. The metal(loid)s are concentrated along the edges of goethite grains or evenly dispersed throughout them. Lead is the dominant metal of interest hosted by goethite, in both pyrite replacement rims and smithsonite inclusions. Mineral mapping by μXRF also illustrates the textural relationships of the minerals hosting the elements of interest. For example, Figure 6c and d shows arsenate minerals forming rims along the edges of other minerals. Both Cd (Fig. 6f) and Cu (not shown) are concentrated within discrete bands in smithsonite. Mineral mapping of thin sections by SEM-MLA provides information on mineral proportions, grain sizes and mineral associations (Fig. 8). These results reveal that anglesite is present in greater proportions than previously thought (e.g. MESH Environmental Inc. 2008), with some samples composed of up to 39 wt% anglesite v. 11 wt% cerussite. 
The other secondary phases identified may also exist in high enough proportions to influence metal(loid) mobility. The grains of some minerals, particularly cinnabar and acanthite, were relatively fine and found primarily as inclusions in smithsonite (Fig. 8C). Textural evidence of dissolution of the metal carbonates and other secondary phases is also seen in strongly weathered material. The distribution of elements amongst minerals in each thin section analysed by MLA is estimated by combining the estimated average concentrations for each mineral (Table 2) with the modal mineralogy calculated by the MLA software (details in Stavinga 2014). Thus, the major sources of the elements of interest are revealed for each sample (Fig. 9). Reconciliation assays, which compare the total concentration of each element calculated by MLA for a thin section with the whole-rock geochemical results, agree fairly well, typically falling within the range of measured total element concentrations. The heterogeneous nature of the mineralization, however, means a direct comparison of concentrations between the same samples is less reliable, resulting in differences of up to 19 weight percent, as the material used for each analysis can vary in mineral proportions. Nevertheless, this suggests that speciation analysis of a large suite of thin sections could give concentrations similar to lithogeochemical analysis of whole-rock samples, and could therefore be relatively representative of the overall geochemistry of the quartz-carbonate-sulphide vein. The maximum percentage of each element found to be hosted by a given mineral is also reported. Thus, in addition to the metal carbonates smithsonite and cerussite, a wide array of other secondary oxidation products are a major source of elements of concern in material from the oxide zone. 
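The speciation estimate described above is a simple weighted sum: the average concentration of an element in each mineral (from EMP/LA-ICP-MS) is multiplied by that mineral's modal proportion (from MLA), then normalized to give each mineral's share of the element. A minimal sketch; all numbers below are hypothetical, chosen only to mirror the kind of result reported (e.g. fine cinnabar inclusions hosting a large share of the Hg):

```python
def element_distribution(conc_ppm, modal_wt_frac):
    """Fraction of an element hosted by each mineral in a thin section."""
    # Element load contributed by each mineral = concentration x abundance.
    load = {m: conc_ppm[m] * modal_wt_frac[m] for m in conc_ppm}
    total = sum(load.values())
    return {m: v / total for m, v in load.items()}

# Hypothetical Hg concentrations (ppm) and modal proportions (wt fraction).
hg_ppm = {"smithsonite": 1500.0, "cerussite": 800.0, "cinnabar": 860_000.0}
modal = {"smithsonite": 0.55, "cerussite": 0.20, "cinnabar": 0.001}

dist = element_distribution(hg_ppm, modal)
```

Even at a modal abundance of only 0.1 wt%, a phase as Hg-rich as cinnabar dominates the element budget in this sketch, which is the same logic behind the trace-mineral hosting percentages reported for each sample.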
It is important to understand the concentrations of elements hosted by these secondary minerals, as the release of potentially hazardous metal(loid)s will depend upon their stability in a mine waste setting. Oxidation of the sulphides is occurring at Prairie Creek, although this does not result in acid conditions due to the alkalinity and effective buffering capacity offered by the carbonate host rocks. While sphalerite oxidation decreases with increasing pH, it begins to increase again above pH 7, primarily due to the influence of the oxidant O2 (Ziping et al. 2012). Oxidation may also be aided in part by armouring of calcite grains by gypsum and hydrous ferric oxide, inhibiting fast neutralization of the acidic solution produced by oxidation of the sulphides and allowing the establishment and stability of an acidic pH within the oxidation zone (Reichert & Borg 2008). At Prairie Creek, dissolution and alteration of sphalerite is more apparent than that of galena, likely due to anglesite and cerussite rims protecting the galena from further oxidation (Stavinga 2014). This may explain why, in highly oxidized samples, the only trace of the original sulphides is typically galena, as is common in many oxidized sulphide ores as well as in many gossans (e.g. Reichert & Borg 2008; Jeong & Lee 2003). Sulphide oxidation releases major and trace elements to pore water, and under alkaline conditions the aqueous concentrations of some elements (e.g. Zn, Pb) will be limited by the solubility of secondary minerals hosting major concentrations of these elements, such as smithsonite (Zn) and cerussite (Pb). Other elements will be limited by their inclusion as trace constituents in major secondary minerals, like As in smithsonite and Se in cerussite, or as major constituents of minor secondary minerals, such as Sb in bindheimite and Hg in cinnabar. 
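As a back-of-envelope illustration of such a solubility limit (not a result from this study; both the equilibrium constant and the carbonate activity below are assumed round numbers), the dissolved Zn permitted by smithsonite equilibrium follows from Ksp = [Zn2+][CO3^2-]:

```python
# Illustrative solubility control: if smithsonite (ZnCO3) equilibrium
# limits dissolved Zn, then [Zn2+] = Ksp / [CO3^2-].
# Both constants below are assumed values for illustration only.
KSP_SMITHSONITE = 10 ** -10.0  # assumed equilibrium constant
carbonate = 10 ** -5.0         # assumed CO3^2- activity (mol/L)

zn_molar = KSP_SMITHSONITE / carbonate  # equilibrium Zn2+ (mol/L)
zn_mg_per_l = zn_molar * 65.38 * 1000.0  # Zn molar mass c. 65.38 g/mol
```

A real prediction would also require the full carbonate speciation as a function of pH and alkalinity (e.g. with a geochemical code such as PHREEQC), but the sketch shows why carbonate-buffered, near-neutral pore water caps the dissolved Zn concentration.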
Dissolution of these secondary minerals could result in the release of potentially hazardous concentrations of major and trace elements to the environment. Samples analysed in this study were collected from quartz-carbonate-sulphide veins in situ, except for the samples from drill core (including stratabound replacement massive sulphides), which had been stored at surface for up to 22 years, and those from the 870 underground level, which had been weathering in the ore stockpile for c. 30 years. Little mineralogical difference was observed between the samples collected directly from the orebody and surface showings and those from stored drill core and the stockpile. Upon the commencement of mining, most Pb and Zn sulphides will be removed, and the finely ground tailings will be mixed with binders and cement to form a paste backfill for potential storage underground. This may or may not include the smithsonite and cerussite fractions, depending on the final mine plan. Previous studies (MESH Environmental Inc. 2008) have suggested that the metal carbonates will dissolve and contribute to the alkalinity and metal(loid) content of the mine water, with smithsonite having a greater potential to dissolve than cerussite. This is due to the tendency of cerussite to alter to anglesite rather than dissolve and release Pb ions (Sato 1992); the release of Zn ions from smithsonite should therefore occur at a greater rate. With the exception of strongly weathered surface showings, samples usually show only minor dissolution of the metal carbonates under the current in situ conditions. However, should dissolution of the metal carbonates occur, Zn, Pb and other trace elements of concern would be released to the pore waters, increasing concentrations significantly if conditions do not favour their attenuation. 
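As a rough illustration of why smithsonite dissolution can release significant Zn, a congruent-dissolution estimate can be made from a solubility product (ZnCO3 ⇌ Zn²⁺ + CO3²⁻). The log Ksp of about −10 and the carbonate-ion activity used below are assumed round literature-style values, not values from this study, and activity corrections are ignored.

```python
# Back-of-envelope Zn release from smithsonite dissolution:
# ZnCO3 <=> Zn2+ + CO3^2-, with Ksp = a(Zn2+) * a(CO3^2-).
# log Ksp ~ -10 is an assumed, commonly tabulated value; activities are
# treated as molar concentrations for simplicity.

KSP_SMITHSONITE = 10 ** -10.0   # assumed solubility product
ZN_MOLAR_MASS = 65.38           # g/mol

def zn_at_equilibrium(co3_activity):
    """Equilibrium Zn2+ (mol/L) for a fixed carbonate-ion activity."""
    return KSP_SMITHSONITE / co3_activity

# Assumed carbonate-ion activity of 1e-5, plausible for alkaline
# carbonate-buffered waters:
zn_mol = zn_at_equilibrium(1e-5)             # mol/L
zn_mg_per_l = zn_mol * ZN_MOLAR_MASS * 1e3   # mg/L
```

Even this simplified estimate yields dissolved Zn on the order of tenths of a mg/L, illustrating why attenuation of carbonate-derived Zn matters if dissolution proceeds.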
The dissolution of the metal carbonates, in addition to the oxidation of sulphides, is therefore anticipated to be a major factor controlling the mobility of the trace elements. Although the other secondary phases are typically present in minor abundance relative to the metal carbonates (with the possible exception of anglesite), the results of the speciation analysis suggest that their dissolution will be a prevailing control on metal(loid) mobility. The response of the tailings to infiltration by mine water can be anticipated to be similar to the reaction of the ore stockpile to infiltrating rainwater. The mineralogical characterization by Skeries (2013) of sediments in the stockpile, which has been exposed for 30 years, identified dissolution textures in the sulphides, the metal carbonates (smithsonite, cerussite) and anglesite. However, the formation of secondary rims on sulphides, such as cerussite and anglesite on galena and goethite on pyrite, should slow their oxidation and release of metal(loid)s. Goethite is the most common Fe-(oxy)hydroxide formed under alkaline conditions (Bigham 1994) and is a predominant precipitate in the ore stockpile and, along with Mn-oxide coatings, in the waste rock pile (Skeries 2013). Goethite is therefore likely to remain stable in the tailings, possibly increasing in abundance along with the Mn-oxides, and through attenuation should act as an efficient immobilizer of the metal(loid)s of concern. Significant amounts of precipitated azurite, christelite, hydrozincite and possibly aurichalcite also coat the mine adit walls, indicating that these phases act as major controls on Zn and Cu mobility once these metals migrate out of the quartz-carbonate-sulphide vein. Potential flooding of the mine workings after production ends at Prairie Creek would result in saturation of the backfilled tailings; exposure to oxygen would be limited, and further oxidation of the remaining sulphides would be slowed. 
However, the oxide tailings will include non-sulphide metal-hosting minerals, and their geochemical behaviour under water-saturated conditions is uncertain (Stavinga 2014). There is generally good agreement between the results of this study and previous geochemical characterization studies (MESH Environmental Inc. 2008; pHase Geochemistry 2010); however, the influence of previously unknown and/or unreported secondary oxidation products on trace element mobility in the oxide zone of the Prairie Creek deposit may be greater than has been indicated. Processing from ore to tailings will remove significant sources of the metal(loid)s; however, smithsonite, and possibly anglesite, may subsequently remain as the major source of many of them (particularly Zn, Cd and Pb), with varying proportions hosted by the other secondary minerals, including azurite/malachite, the arsenates, bindheimite, cinnabar, acanthite and goethite. These include significant Hg (an environmental concern) and Ag (a valuable commodity) components that could potentially occur as cinnabar and acanthite trapped within smithsonite grains. Mobilized As may be released primarily from the arsenates, with Sb coming from bindheimite, Cu from anglesite, malachite, azurite and covellite, and Se from a combination of anglesite, cinnabar and the arsenates. Overall, the metals released by the sulphides and sulphosalts are largely controlled by the secondary oxidation products, which keep release rates relatively low under the present oxidizing, alkaline conditions. The near-neutral to alkaline pH conditions expected in a mine waste setting, including paste backfill, should continue to limit dissolution and encourage precipitation and attenuation of the metals of concern. The stability of many of the secondary phases is, however, highly sensitive to pH, and a change to even slightly acidic conditions may greatly increase their dissolution along with that of the sulphides. 
Saturation of the tailings will greatly slow sulphide oxidation, but reducing conditions could increase dissolution of the secondary oxidation products. Metals are likely to re-precipitate or be attenuated by more stable phases with each change in conditions, but whether concentrations will be low enough to meet water quality guidelines is uncertain.

The authors would like to thank the Geological Survey of Canada, particularly the TGI-4 program, for supporting this research. Canadian Zinc Corporation provided access, in-kind support and advice. Funding was also provided by the Northern Studies Training Program, the Society of Economic Geologists, and an NSERC Discovery Grant to H.E. Jamieson. Synchrotron analysis was done at beamline X26A, National Synchrotron Light Source, Upton, New York. Shannon Shaw of pHase Geochemistry was particularly helpful.

Nineteen grab samples were collected from exposures of the main quartz-carbonate-sulphide vein in the underground 930 level adit. Because the 870 level adit was not accessible in 2013, samples of mineralized vein material originating from the 870 level were collected from the historic ore stockpile on the mine site, giving a total of 22 samples from the underground workings. Sample locations in the underground adits are shown in Figure A1 (provided in the Supplementary material). Thirty-eight drill holes were sampled to provide a suite of 83 samples spatially distributed across the deposit and representative of the types of mineralization and the various degrees of oxidation and alteration. A summary of the surface, adit and core samples selected for subsequent analysis by one or more of the analytical methods described in this paper is included in Table A1 (provided in the Supplementary material). 
Forty-three whole-rock samples were analysed for 45 elements (Ag, Al, As, B, Ba, Be, Bi, Ca, Cd, Ce, Co, Cr, Cu, Fe, Ga, In, K, La, Li, Mg, Mn, Mo, Na, Ni, P, Pb, Rb, S, Sb, Sc, Se, Sn, Sr, Ta, Te, Th, Ti, Tl, U, V, W, Y, Zn, Zr). Five separate samples were used for replicate testing. Replicate and reference material results are included in Table A2 (provided in the Supplementary material). Elements typically had an acceptable relative percent difference (RPD) between originals and replicates of within 15%. A total of 17 thin sections were analysed for 13 elements (Ag, As, Ca, Cd, Cu, Fe, Hg, Mg, Mn, Pb, Sb, Se, Zn) in nine phases (excluding unknowns): smithsonite (smt), cerussite (cer), malachite (mlc), anglesite (ang), sphalerite (sp), pyrite (py), galena (gn), tennantite-tetrahedrite (ttr) and bournonite (bnt). LIF, PET and TAP diffracting crystals were used to detect the element wavelengths for each mineral phase; large-area and high-intensity LIF and PET crystals are indicated with an L and H, respectively. Measured X-ray lines were as follows.
LIF crystal: Zn Kα (smt).
LIFL crystal: Fe Kα (smt, mlc, cer, ang, sp, gn, ttr, py, bnt); Mn Kα (smt, cer, ang); Cu Kα (smt, mlc, cer, ang, sp, gn, ttr, py, bnt); Zn Kα (smt, mlc, cer, ang, sp, gn, ttr, py, bnt).
PET crystal: Sb Lα (smt, mlc, cer, ang, sp, gn, ttr, py, bnt); Pb Mα (cer, ang, smt, mlc, sp, gn, ttr, py, bnt); S Kα (sp, gn, ttr, py, bnt).
PETH crystal: Pb Mα (smt, cer, ang); Ca Kα (smt, cer, ang); Cd Lα (smt, mlc, cer, ang, sp, gn, ttr, py, bnt); Sb Lα (smt, cer, ang); Ag Lα (smt, mlc, cer, ang, sp, gn, ttr, py, bnt); Hg Mα (smt, mlc, cer, ang, sp, gn, ttr, py, bnt).
TAP crystal: Mg Kα (smt, cer, ang); As Lα (smt, mlc, cer, ang, sp, gn, ttr, py, bnt); Se Lα (smt, mlc, cer, ang). 
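The replicate acceptance criterion mentioned above (RPD between original and replicate within 15%) is the standard relative-percent-difference check; a minimal sketch, with illustrative assay values:

```python
# Relative percent difference (RPD): the absolute difference between an
# original and a replicate analysis, divided by their mean, in percent.

def rpd(original, replicate):
    mean = (original + replicate) / 2.0
    if mean == 0:
        return 0.0
    return abs(original - replicate) / mean * 100.0

# Hypothetical example: a Zn assay of 4.10 wt% vs a replicate of 4.35 wt%.
value = rpd(4.10, 4.35)       # about 5.9%
acceptable = value <= 15.0    # passes the 15% acceptance criterion
```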
Secondary ‘working’ standards of sphalerite and tetrahedrite and a primary galena standard were analysed as unknowns to test the instrument calibration for analysis of sulphides and sulphosalts. Owing to a lack of working standards, the calibration could not be tested for the metal carbonates. The data were processed using the CITZAF V3.5 online software program for JEOL™ written by J. T. Armstrong (California Institute of Technology). The lower limit of detection (LLD) was calculated for each spot analysed by electron microprobe. Abbreviations are as follows: ZAF, total matrix correction factor; std, standard; unk, unknown; bkg, background; C, concentration (wt%); I, intensity ((s*nA)−1); t, count time (s); curr, current (nA) (pers. comm. B. Joy, 2013). Of the nine elements of interest detected in this study (Zn, Pb, Ag, As, Cd, Cu, Hg, Sb and Se), Hg was not considered owing to a lack of standard data. Published and preferred data for the standards used (GSC-1G, GSD-1G, GSE-1G and BHVO-2G) were obtained from the Geological and Environmental Reference Materials database (http://georem.mpch-mainz.gwdg.de/sample_query_pref.asp) and are included in the Supplementary material, Table A3. A total of 40 analyses were conducted on five phases (smithsonite, cerussite, anglesite, azurite, galena). Analyses were conducted using the line technique for laser ablation of minerals in situ. Data were compiled and interpreted using Thermo Fisher Scientific PlasmaLab® software. Concentrations one order of magnitude above or below the upper and lower limits of the calibration curve were considered unreliable and thus not used (pers. comm., D. Layton-Matthews, 2014). Eight thin sections were analysed using the Mineral Liberation Analyzer (MLA) automated mineralogy software at Queen's University. 
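For reference, a standard 3σ formulation of the EMPA lower limit of detection, consistent with the abbreviations listed above (C in wt%, I in counts s⁻¹ nA⁻¹), is sketched below. This is a reconstruction of a commonly used expression; whether it matches the exact form used in this study is an assumption.

```latex
% 3-sigma detection limit: background counting noise on the unknown,
% converted to concentration via the standard's peak sensitivity
% (C_std / I_std) and the matrix correction factor for the unknown.
LLD \;=\; ZAF_{unk} \cdot \frac{3\,C_{std}}{I_{std}}
\sqrt{\frac{I_{bkg,\,unk}}{t_{bkg,\,unk}\cdot curr}}
```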
The software uses the back-scattered electron imagery and energy-dispersive X-ray (EDS) analysis of an SEM to determine each particle's shape, size and mineralogy. The EDS data are then compared with a user-generated mineral reference library of known phases and corresponding EDS spectra to classify each particle (Buckwalter-Davis 2013, and references therein). The percentage of a particular element within a thin section that is hosted by a specific mineral phase was calculated by combining the estimated average concentration for each mineral with the modal mineralogy calculated by the MLA software. Average concentrations of the elements of interest were estimated using a variety of methods. The mean was used when an analysis gave all detected values. When an analysis gave a mix of detected values and non-detects, the maximum likelihood estimator (MLE) method was used, as described by Helsel (2005, 2012). If the sample size was small (typically <25) and/or an adequate distribution for the data (e.g. normal, lognormal) could not be determined, the nonparametric Kaplan–Meier method was used to estimate the mean (Helsel 2005, 2012). If fewer than three detected values and two separate detection limits were available, preventing reliable use of the Kaplan–Meier method, then simple substitution of half the detection limit in place of non-detects was used. Elements with no detected values in a given mineral were not given an estimated average. Computations were completed using Minitab® software.

Correction notice: The original version was incorrect due to an error in the author list: D. Layton-Matthews has now been included.

2008. Mineralogical and geochemical characterization of nonsulfide Zn–Pb mineralization at Silvermines and Galmoy (Irish Midlands). Ore Geology Reviews, 33, 168–186.
1994. Mineralogy of ochre deposits formed by sulphide oxidation. In: Jambor, J.L. & Blowes, D.W. 
(eds) Short Course Handbook on Environmental Geochemistry of Sulphide Mine-Wastes. Mineralogical Association of Canada, 22, 103–132.
2003. Non-sulfide zinc mineralization in Europe: an overview. Economic Geology, 98, 715–729.
2015. The “Calamines” and the “Others”: the great family of supergene nonsulfide zinc ores. Ore Geology Reviews, 67, 208–233.
2013. Automated Mineral Analysis of Mine Waste. MSc thesis, Queen's University, Kingston, Ontario.
2010. Main Report for Developer's Assessment Report. Canadian Zinc Corporation submission to Mackenzie Valley Review Board in support of Environmental Assessment of Prairie Creek Mine EA 0809-002.
2014. Prairie Creek. [Online] Available at: http://www.canadianzinc.com/projects/prairie-creek [last accessed 9 June 2014].
2007. Appendix 1. A review of the bedrock geology of the Nahanni River region and its context in the Northern Cordillera. In: Wright, D.R., Lemkow, D. & Harris, J.R. (eds) Mineral and Energy Resource Assessment of the Greater Nahanni Ecosystem Under Consideration for the Expansion of the Nahanni National Park Reserve, Northwest Territories. Geological Survey of Canada, Open File 5344, 327–365.
2009. Thermo-optical detection of defects and decarbonation in natural smithsonite. Physics and Chemistry of Minerals, 36, 431–438.
2016. A comparison of in situ analytical methods for trace element measurement in gold samples from various South African gold deposits. Geostandards and Geoanalytical Research, 40, 267–289.
2005. Nondetects and Data Analysis: Statistics for Censored Environmental Data. John Wiley & Sons, New Jersey.
2012. Statistics for Censored Environmental Data Using Minitab and R. 2nd edn. John Wiley & Sons, New York.
2003. Classification, genesis, and exploration guides for nonsulfide zinc deposits. Economic Geology, 98, 685–714.
2003. Secondary mineralogy and microtextures of weathered sulphides and manganoan carbonates in mine waste-rock dumps, with implications for heavy-metal fixation. American Mineralogist, 88, 1933–1942.
2005. GeoReM: a new geochemical database for reference materials and isotopic standards. Geostandards and Geoanalytical Research, 29, 333–338.
2005. Lavrion smithsonites: a mineralogical and mineral chemical study of their coloration. Mineral Deposit Research: Meeting the Global Challenge, 9, 983–986.
2010. A study on the distribution characteristics and existing states of cadmium in the Jinding Pb-Zn deposit, Yunnan Province, China. Chinese Journal of Geochemistry, 29, 319–325.
2007. Stream sediment geochemistry in the proposed extension to the Nahanni Park Reserve. In: Wright, D.F., Lemkow, D. & Harris, J. (eds) Mineral and Energy Resource Assessment of the Greater Nahanni Ecosystem Under Consideration for the Expansion of the Nahanni National Park Reserve, Northwest Territories. Geological Survey of Canada, Open File 5344, 75–98.
2008. Geochemical Characterization Report for the Prairie Creek Project, NWT. Project No. M004-001.
1987. The Prairie Creek Embayment and Lower Paleozoic Strata of the Southern Mackenzie Mountains. Geological Survey of Canada Memoir, 412, 1–195.
2011. Hydrogeochemical processes governing the origin, transport and fate of major and trace elements from mine wastes and mineralized rock to surface waters. Applied Geochemistry, 26, 1777–1791.
2007. Isotope geochemistry of the Prairie Creek carbonate-hosted zinc-lead-silver deposit, southern Mackenzie Mountains, Northwest Territories. In: Wright, D.F., Lemkow, D. & Harris, J. (eds) Mineral and Energy Resource Assessment of the Greater Nahanni Ecosystem Under Consideration for the Expansion of the Nahanni National Park Reserve, Northwest Territories. Geological Survey of Canada, Open File 5344, 131–176.
2015. Carbonate-hosted nonsulphide Zn-Pb mineralization of southern British Columbia, Canada. Mineralium Deposita, 50, 923–951.
2010. Geochemical characterization of paste and paste components, Prairie Creek project, NWT, Canada. Appendix 4 of Developer's Assessment Report: Canadian Zinc Corporation submission to Mackenzie Valley Review Board Environmental Assessment of Prairie Creek Mine EA 0809-002.
2005. ATHENA, ARTEMIS, HEPHAESTUS: data analysis for X-ray absorption spectroscopy using IFEFFIT. Journal of Synchrotron Radiation, 12, 537–541.
2008. Numerical simulation and a geochemical model of supergene carbonate-hosted non-sulphide zinc deposits. Ore Geology Reviews, 33, 134–151.
1992. Persistency-field Eh-pH diagrams for sulphides and their application to supergene oxidation and enrichment of sulphide ore bodies. Geochimica et Cosmochimica Acta, 56, 3133–3156.
2013. Characterization of geochemical and mineralogical controls on metal mobility in the Prairie Creek Mine area, NWT. Unpublished MSc thesis, Queen's University, Kingston, Ontario.
2014. Trace element geochemistry and metal mobility of the oxide mineralization at the Prairie Creek zinc-lead-silver deposit, NWT. Unpublished MSc thesis, Queen's University, Kingston, Ontario.
2000. Guidelines for terms related to chemical speciation and fractionation of elements. Definitions, structural aspects, and methodological approaches. Pure and Applied Chemistry, 72, 1453–1470.
1991. Trace element speciation in soils, soil extracts, and solutions. Mikrochimica Acta [Wien], II, 49–57.
2005. The speciation of arsenic in iron oxides in mine wastes from the Giant Gold Mine, N.W.T.: application of synchrotron micro-XRD and micro-XANES at the grain scale. The Canadian Mineralogist, 43, 1205–1224.
2012. A simulation experimental study on oxidative kinetics of sphalerite under hypergene condition. Chinese Journal of Geochemistry, 31, 457–464.
PLEASE READ THE FOLLOWING TERMS AND CONDITIONS CAREFULLY BEFORE DOWNLOADING, INSTALLING OR USING THE MALWAREBYTES SOFTWARE THAT ACCOMPANIES THIS SOFTWARE LICENSE AGREEMENT, THE “SOFTWARE-AS-A-SERVICE” DELIVERY SERVICES (“SAAS SERVICES”) THAT MAY BE USED TO PROVIDE ACCESS TO SUCH SOFTWARE, OR ANY ACCOMPANYING DOCUMENTATION (COLLECTIVELY, THE "SOFTWARE"). THE TERMS AND CONDITIONS OF THIS SOFTWARE LICENSE AGREEMENT AND THE MALWAREBYTES ORDERING DOCUMENT YOU EXECUTED OR AGREED TO, AND (WHERE APPLICABLE) ANY MALWAREBYTES LICENSE KEY INFORMATION, IN EACH CASE GOVERNING YOUR LICENSE TO THE SOFTWARE (COLLECTIVELY, THE "PURCHASE RECEIPT") (THIS SOFTWARE LICENSE AGREEMENT AND THE PURCHASE RECEIPT COLLECTIVELY, THIS "AGREEMENT") ARE AN AGREEMENT BETWEEN YOU AND MALWAREBYTES INC. ("MALWAREBYTES") AND GOVERN USE OF THE SOFTWARE UNLESS YOU AND MALWAREBYTES HAVE EXECUTED A SEPARATE WRITTEN AGREEMENT GOVERNING USE OF THE SOFTWARE. "MALWAREBYTES" MEANS: (a) IF YOU ACQUIRED THE SOFTWARE IN THE UNITED STATES OR CANADA, MALWAREBYTES INC., A DELAWARE CORPORATION; AND (B) IF YOU ACQUIRED THE SOFTWARE IN ANY OTHER COUNTRY, MALWAREBYTES LIMITED, A COMPANY INCORPORATED IN IRELAND. THIS SOFTWARE LICENSE AGREEMENT CONTAINS A BINDING ARBITRATION CLAUSE AND CLASS ACTION WAIVER. IF YOU ARE RESIDENT IN THE U.S. AND A MALWAREBYTES FOR HOME CUSTOMER, THESE AFFECT YOUR RIGHTS TO RESOLVE A DISPUTE WITH MALWAREBYTES, AND YOU SHOULD READ THEM CAREFULLY. FOR EXAMPLE, EXCEPT IF YOU OPT OUT AND EXCEPT FOR CERTAIN TYPES OF DISPUTES DESCRIBED IN THE "Agreement to Arbitrate – U.S. Customers" SECTION BELOW, YOU AGREE THAT DISPUTES BETWEEN YOU AND MALWAREBYTES WILL BE RESOLVED BY BINDING, INDIVIDUAL ARBITRATION AND YOU ARE WAIVING YOUR RIGHT TO A TRIAL BY JURY OR TO PARTICIPATE AS A PLAINTIFF OR CLASS MEMBER IN ANY PURPORTED CLASS ACTION OR REPRESENTATIVE PROCEEDING. Malwarebytes is willing to license the Software to you only upon the condition that you accept all the terms contained in this Agreement. 
By clicking to accept where indicated below or by downloading, installing or using the Software, you have indicated that you understand this Agreement and accept all of its terms. If you are accepting the terms of this Agreement on behalf of a company or other legal entity, you represent and warrant that you have the authority to bind that company or other legal entity to the terms of this Agreement, and, in such event, "you" and "your" will refer to that company or other legal entity. If you do not accept all the terms of this Agreement, then Malwarebytes is unwilling to license the Software to you and you are prohibited from using it. If you are a Malwarebytes for Home customer and purchased the Software from Malwarebytes directly pursuant to our 60-day money-back guarantee you may be eligible to request cancellation and refund within 60 days of purchase of your new subscription. If you purchased Malwarebytes for Home from other third-party vendors, including retail stores, please contact those vendors directly for more information about their refund policies. (a) Free & Trial License. If you have obtained a free, trial or evaluation version of the Software from Malwarebytes or from a Malwarebytes authorized reseller, then conditioned upon your compliance with the terms and conditions of this Agreement, Malwarebytes grants you a non-exclusive and non-transferable license to Execute the Software solely in executable form. The foregoing license permits Execution of only such number of copies of the Software, and on such number of devices (including mobile devices), computers or virtual machines ("Devices"), as is expressly permitted by Malwarebytes with respect to your trial. If no such number of copies or Devices is specified by Malwarebytes, the foregoing license permits Execution of a single copy of the Software on a single Device. 
For purposes of this Agreement, "Execute" and "Execution" means to load, install, and/or run the Software locally on a single Device in order to benefit from its functionality as designed by Malwarebytes. If you purchased a license to the Software from Malwarebytes or from a Malwarebytes authorized reseller, then conditioned upon your compliance with the terms and conditions of this Agreement, Malwarebytes grants you a non-exclusive and non-transferable license to Execute the number of copies of the Software for which you have paid solely in executable form on the corresponding number of Devices owned or used by you. You agree that your purchases are not contingent on the delivery of any future functionality or features (including future availability of any Software beyond the current license term or any new releases), or dependent on any oral or written public comments made by Malwarebytes regarding future functionality or features. (a) Malwarebytes for Home – Free & Paid. If you are a Malwarebytes for Home user (or any other Malwarebytes Software intended for home use), and whether you have a free or paid license, this Section 2(a) applies. Your license permits you to use the Software solely for your personal, non-commercial purposes; the Software may not be used on any Device that is used in a business or for business purposes. Once Executed on a Device, you may transfer the Software to a different Device, provided that you uninstall and remove the Software from the first Device. You may not combine the Software with any third party script, application, hardware or tools which would cause it to run on an automated or unattended basis. You may not transfer the Software to a different user, except that once installed onto a Device, the Software may be operated by any person directly using the Device (i.e., not remotely), provided that you are responsible for each such person's operation of the Software. 
You may make one copy of the Software for back-up or archival purposes, or copy the Software onto the hard disk of your Device and retain the original for back-up or archival purposes. Notwithstanding the second sentence of this Section 2(a), if you have a business with no more than 10 total Devices, you may use Malwarebytes for Home Software in your business for business purposes provided that your usage shall be governed by the terms and conditions of this Agreement applicable to Malwarebytes for Business users and not the terms and conditions applicable to Home users. (“Small Business Exception”). If you use the Small Business Exception, references to Malwarebytes for Business shall be read as governing your usage of the Software. If you are a Malwarebytes for Business user, and you have a trial license, your license permits you to use the Software solely for evaluation purposes, and not for production use. You may also use our Malwarebytes Software downloaded via the business link to remediate up to five Devices every 30 days. If you are a Malwarebytes for Business user, and you have a paid license, your license permits you to use the Software solely for your internal business purposes. Other than the limited exception stated in the immediately following sentence, once Executed on a Device, you may not transfer the Software to a different Device, even if you uninstall and remove the Software from the first Device. During each year of your licensed subscription you may transfer Software that has been Executed on a Device to a different Device, provided that each of the following requirements are met: (a) the amount of Devices subject to transfer does not exceed 10% of your licensed Devices for such Software (“Transfer Allowance”); (b) only single transfers are permitted (the transferred Software cannot be transferred to a third Device in the same year); and (c) you have uninstalled and removed the Software from the first Device. 
Unused amounts of your Transfer Allowance will not carry over to subsequent subscription years. If you are a Malwarebytes for Business customer, and whether you have a free or paid license: (i) you may make a reasonable number of copies of the Software for back-up or archival purposes; (ii) the Software may only be used by your employees and consultants (“Authorized Users”), who have agreed to abide by the terms of this Agreement and who may only use the Software for the purposes of performing their job functions for you; (iii) you are responsible for the use of the Software by your Authorized Users (and their compliance with this Agreement); and (iv) once Executed on a Device, the Software may be operated by any Authorized User using the Device, directly or (where that person is providing support services to you with respect to that Device) via remote connection; provided that each such Device is running an authorized copy of the applicable Software. Other than for the sole purpose of assisting the management and administration of Software on Devices within a network, you may not combine the Software with any third party script, application, hardware or tools which would cause it to run on an automated or unattended basis. If you are a Malwarebytes for Teams user, your license shall be governed by the terms and conditions of this Agreement applicable to Malwarebytes for Business users; references to Malwarebytes for Business shall be read as governing your usage of the Software. Notwithstanding anything to the contrary in this Agreement, you are only eligible to use Malwarebytes for Teams if your business has no more than 25 Devices. (d) Optional Software Utilities, Beta Features and Beta Releases. 
From time-to-time, Malwarebytes, at its sole discretion, may make available to you optional Software, including but not limited to utilities for supporting the usage of the Malwarebytes for Home and Malwarebytes for Business Software, beta features that can be enabled within the Software, and beta releases of Software (collectively “Optional Items”). Unless a particular Optional Item includes its own separate and specific terms and conditions, this Agreement shall govern the usage of Optional Items. Conditioned upon your compliance with the terms and conditions of this Agreement, Malwarebytes grants you a non-exclusive and non-transferable license to Execute the Optional Items solely in executable form and solely for your internal business purposes of supporting the Software, and in the case of beta features and releases, for evaluation purposes. Software such as Optional Items are sometimes provided by software providers as preview releases of new features and programs, as well as quick fixes for resolving specific issues. Optional Items are not fully tested by Malwarebytes and may include significant issues. You acknowledge that Optional Items are likely to present risks associated with their use. Malwarebytes strongly recommends that you back up all of your data prior to using such type of software from any source. Notwithstanding anything to the contrary in this Agreement, Optional Items are provided "as is", and do not carry any warranties or maintenance or support; similarly, in no event shall Malwarebytes be liable for any damage arising from the use of Optional Items. You must have a license to the Software for every Device on which you operate the Software. You may run the Software on a network, provided that you have a license to the Software for each: (1) Device that the Software is Executed on; and (2) Device or user instance that can access the Software over that network that is not included in (1). 
You may not use on behalf of, or make the functionality of the Software available to, third parties for any purpose, such as for providing any computer repair, help desk or troubleshooting service. Except as expressly specified or permitted in this Agreement, you may not: (i) copy (except in the course of loading or installing) or modify the Software, including but not limited to adding new features or otherwise making adaptations that alter the functioning of the Software; (ii) transfer, sublicense, lease, lend, rent or otherwise distribute the Software to any third party; (iii) make the functionality of the Software available to any third party through any means, including but not limited to by uploading the Software to a network or file-sharing service or through any hosting, application services provider, service bureau, SaaS or any other type of services; or (iv) use the Software for any illegal purpose or conduct. You acknowledge and agree that portions of the Software, including but not limited to the source code and the specific design and structure of individual modules or programs, constitute or contain trade secrets of Malwarebytes and its licensors. Accordingly, you agree not to disassemble, decompile or reverse engineer the Software or Database (defined below), in whole or in part, or permit or authorize a third party to do so, except to the extent such activities are expressly permitted by law notwithstanding this prohibition. You will comply with any additional restrictions contained in your Purchase Receipt or other purchasing documentation. For Software provided through SaaS Services, Malwarebytes shall use commercially reasonable efforts to make such SaaS Services available to you, subject to downtime for scheduled or emergency maintenance. You may only use the SaaS Services in connection with your access to the Software and solely for your internal business purposes. Each copy of the Software is licensed, not sold. 
For purposes of this Agreement, the terms "purchase," "sell" and like terms refer to purchase or sale of a license to use the Software and not to a purchase or sale of title to or ownership of any rights or other interests in the Software. You own the media on which the Software is recorded, but you acknowledge and agree that Malwarebytes retains ownership of the Software itself and any related data or databases used by Malwarebytes or the Software (the "Database"), including all intellectual property rights therein. The Software and Database are protected by U.S. copyright law and international treaties. You will not delete or in any manner alter the copyright, trademark, and other proprietary rights notices or markings appearing on the Software as delivered to you. Malwarebytes reserves all rights in the Software and Database not expressly granted to you in this Agreement. From time to time, Malwarebytes may, but has no obligation to, provide updates to the Software. You are advised to update the Software regularly, or to set it to update automatically if that feature is available in your version of the Software. If you are a paying customer with a current subscription purchased from Malwarebytes or a Malwarebytes authorized reseller, we will make available to you the standard updates and maintenance and support that we make generally available at no additional cost to paying subscribers from time to time. Nothing in this Agreement entitles you to receive any support, maintenance, updates, upgrades, content or new versions of the Software, unless you are a paying customer with a current subscription purchased from Malwarebytes or a Malwarebytes authorized reseller. You understand and agree that your purchase is not contingent on the delivery of any future functionality or features, or dependent on any oral or written public comments made by Malwarebytes regarding future functionality or features. 
Malwarebytes reserves the right to designate any updates, additional content or features as requiring separate payment or purchase of a separate subscription at any time. Malwarebytes specifically reserves the right to cease providing, updating, maintaining or supporting the Software or Database at any time in its sole discretion, in accordance with the Malwarebytes Lifecycle Policy located at https://www.malwarebytes.com/support/lifecycle/. If you have entered into a separate maintenance and support or similar agreement with Malwarebytes, then Malwarebytes will provide Software maintenance and support in accordance with the terms of that agreement which is located at https://www.malwarebytes.com/eula/services-agreement/, not this Agreement. (a) Paid Subscription License Term. If you have purchased a license to the Software, then the initial term of this Agreement commences on the date specified in the Purchase Receipt or applicable purchasing documentation accompanying the Software (or if no such date is specified, the date you initially Execute a copy of the Software on a Device (regardless of the number of copies of the Software that you are permitted to use in accordance with this Agreement)), and, in each case, continues for the period of time set forth in the Purchase Receipt or applicable purchasing documentation (or, if no such date is specified, for one year). At the end of such initial term (and each renewal term thereafter, if any), subject to payment of the applicable license fees for each such renewal term, this Agreement will automatically renew for additional successive terms equal to the period of time set forth in the applicable renewal Purchase Receipt or purchasing documentation (or, if no such date is specified, for additional successive terms of one year), unless either party provides the other party with notice of nonrenewal at least 30 days prior to the end of the then-current term. 
IN OTHER WORDS, each subscription renews automatically until you cancel it in accordance with this Agreement. You can cancel your subscription at any time in accordance with this Agreement; see our License Renewal FAQs which can be found on our website (www.malwarebytes.com). If you are a Malwarebytes for Home customer that purchased your Software from Malwarebytes directly, fees for renewal license terms are described at the time of purchase within the transaction cart. If you are a Malwarebytes for Business customer, for renewal license terms, your license fee per Device will be increased to the then-current list price at the time of your renewal. (b) Malwarebytes for Home - Free License Term. If you have obtained a license to a free version of the Software, then your license will continue until terminated in accordance with this Agreement. (c) Malwarebytes for Business - Trial License Term. If you have obtained a trial license to the Software, then your license will continue for such time period as may be specified by Malwarebytes with respect to such trial (or, if no such period is specified, for 30 days). In addition, Malwarebytes may terminate your trial license at any time at its sole discretion. Subject to the notice of nonrenewal requirement in Section 5(a), as applicable, you may terminate the license at any time by destroying all copies of the Software in your possession or control. The license granted under this Agreement will automatically terminate, with or without notice from Malwarebytes, if you breach any term of this Agreement. If you are a Malwarebytes for Home user, and you have a paid license, if you fail to pay the applicable license fees as specified in the Purchase Receipt or applicable purchasing documentation, your existing license to the Software ends automatically and your license shall automatically convert into a free license; as such, your Software will no longer be eligible to receive automatic updates. 
If you are a Malwarebytes for Business customer, and you have a paid license, if you fail to pay the applicable license fees as specified in the Purchase Receipt or applicable purchasing documentation, your existing license to the Software ends automatically. If you are a Malwarebytes for Business customer, and you have a trial license, your license to the Software ends automatically at the end of the applicable trial period. If you are a Malwarebytes for Business customer, you acknowledge that upon expiration or termination of your license, the Software and any license key may automatically de-activate and you may no longer be able to access and use the Software. If you assert any patents against us or any of our other customers based on use of the Software, your license to the Software ends automatically. Upon termination or expiration of this Agreement, your rights to use the Software cease and you shall not be entitled to a refund of any pre-paid fees. Sections 3, 5(e), 7, 8, 9, 11(a), 12, 13 and 14 of this Agreement will survive any termination or expiration of this Agreement. The price payable by you is the price stated in the Purchase Receipt or applicable purchasing documentation (or, if no such price is specified, the price set out in our then-current standard published price list). Our prices are exclusive of taxes, duties, levies, tariffs, and other governmental charges (including, without limitation, VAT) (collectively, "Taxes"). If we issue an invoice to you, all invoices are payable within 30 days of the invoice date unless specified differently in the invoice or purchasing documentation. You are responsible for payment of all Taxes and any related interest and/or penalties resulting from any payments made to us, other than any taxes based on Malwarebytes' net income. 
All amounts are payable and charged (i) at the beginning of the subscription, when you place your order, and, (ii) because each subscription renews automatically until you cancel it in accordance with this Agreement, at the time of each renewal until you cancel. You must cancel your subscription in accordance with this Agreement before it renews to avoid the billing of the fees for the next subscription period. You will not receive a refund for the fees you already paid for your current subscription period. You can cancel your subscription at any time in accordance with this Agreement; see our License Renewal FAQs which can be found on our website (www.malwarebytes.com). Provided that you purchased the Software from Malwarebytes or a Malwarebytes authorized reseller, Malwarebytes warrants that any physical media manufactured by Malwarebytes on which the Software is distributed will be free from defects for a period of 60 days from the date of delivery of the Software to you. Your sole and exclusive remedy, and Malwarebytes' sole liability, in the event of a breach of the foregoing warranty will be that Malwarebytes will, at its option, replace any defective media returned to Malwarebytes within the warranty period or refund the money you paid for the Software. TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, (a) THE LIMITED WARRANTY SET FORTH IN THIS SECTION 8 IS EXCLUSIVE AND IN LIEU OF ALL OTHER WARRANTIES, EXPRESS OR IMPLIED; AND (b) EXCEPT FOR THE LIMITED WARRANTY SET FORTH IN THIS SECTION 8, MALWAREBYTES DISCLAIMS ALL WARRANTIES AND CONDITIONS, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY IMPLIED WARRANTIES AND CONDITIONS OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT, AND ANY WARRANTIES AND CONDITIONS ARISING OUT OF COURSE OF DEALING OR USAGE OF TRADE. NO ADVICE OR INFORMATION, WHETHER ORAL OR WRITTEN, OBTAINED FROM MALWAREBYTES OR ELSEWHERE WILL CREATE ANY WARRANTY OR CONDITION NOT EXPRESSLY STATED IN THIS AGREEMENT. 
Malwarebytes does not warrant that the Software will meet your requirements, that the Software will operate in the combinations, on the operating system or in the environments that you may select for Execution, that the operation of the Software will be error-free or uninterrupted, or that all Software errors will be corrected. Malwarebytes specifically disclaims any warranty or representation as to the Software's ability to eliminate any specific malware threats or the completeness of the Database or protection modules. You are solely responsible for the data, software and other content carried on your Devices and for backing up your data, software and other content. MALWAREBYTES' TOTAL LIABILITY TO YOU FROM ALL CAUSES OF ACTION AND UNDER ALL THEORIES OF LIABILITY WILL BE LIMITED TO AMOUNTS PAID TO MALWAREBYTES BY YOU FOR THE SOFTWARE DURING THE 12 MONTHS PRIOR TO THE EVENT GIVING RISE TO THE CLAIM. IN NO EVENT WILL MALWAREBYTES BE LIABLE TO YOU FOR ANY SPECIAL, INCIDENTAL, EXEMPLARY, PUNITIVE OR CONSEQUENTIAL DAMAGES (INCLUDING LOSS OF DATA, BUSINESS, PROFITS OR ABILITY TO EXECUTE) OR FOR THE COST OF PROCURING SUBSTITUTE PRODUCTS ARISING OUT OF OR IN CONNECTION WITH THIS AGREEMENT OR THE EXECUTION OR PERFORMANCE OF THE SOFTWARE, WHETHER SUCH LIABILITY ARISES FROM ANY CLAIM BASED UPON CONTRACT, WARRANTY, TORT (INCLUDING NEGLIGENCE), STRICT LIABILITY OR OTHERWISE, AND WHETHER OR NOT MALWAREBYTES HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH LOSS OR DAMAGE. THE FOREGOING LIMITATIONS WILL SURVIVE AND APPLY EVEN IF ANY LIMITED REMEDY SPECIFIED IN THIS AGREEMENT IS FOUND TO HAVE FAILED OF ITS ESSENTIAL PURPOSE. Some jurisdictions do not allow the limitation or exclusion of liability for incidental or consequential damages, so the above limitation or exclusion may not apply to you. 
The Software is a "commercial item" as that term is defined in FAR 2.101, consisting of "commercial computer software" and "commercial computer software documentation," respectively, as such terms are used in FAR 12.212 and DFARS 227.7202. If the Software is being acquired by or on behalf of the U.S. Government, then, as provided in FAR 12.212 and DFARS 227.7202-1 through 227.7202-4, as applicable, the U.S. Government's rights in the Software will be only those specified in this Agreement. Export and EU Data Protection Laws. (a) Export. You agree to comply fully with all U.S. and other applicable export laws and regulations to ensure that neither the Software nor any technical data related thereto nor any direct product thereof is exported or re-exported directly or indirectly in violation of, or used for any purposes prohibited by, such laws and regulations. (b) EU Data Protection Laws. Personal Data may be sent to facilities hosted outside of the country where you purchased or utilize the Software. Malwarebytes will comply with European Economic Area data protection law regarding the collection, use, transfer, retention, and other processing of Personal Data from the European Economic Area, pursuant to the EU-US Privacy Shield and the E.U. Standard Contractual Clauses for data transfer, where applicable. Agreement to Arbitrate – U.S. Malwarebytes for Home Customers. If you are a Malwarebytes for Home customer, acquired the Software in the U.S. and are a U.S. resident, you and Malwarebytes agree that any dispute, claim or controversy arising out of or relating to this Agreement or the breach, termination, enforcement, interpretation or validity thereof or the use of the Software (collectively, "Disputes") will be settled by binding arbitration, except that each party retains the right: (i) to bring an individual action in small claims court and (ii) to seek injunctive or other equitable relief in a court of competent jurisdiction to prevent the actual or threatened infringement, misappropriation or violation of a party's copyrights, trademarks, trade secrets, patents or other intellectual property rights (the action described in the foregoing clause (ii), an "IP Protection Action"). Without limiting the preceding sentence, you will also have the right to litigate any other Dispute if you provide Malwarebytes with written notice of your desire to do so by email to legal@malwarebytes.com within 30 days following the date you first purchase or obtain the Software (such notice, an "Arbitration Opt-out Notice"). If you don't provide Malwarebytes with an Arbitration Opt-out Notice within the 30-day period, you will be deemed to have knowingly and intentionally waived your right to litigate any Dispute except as expressly set forth in clauses (i) and (ii) above. The exclusive jurisdiction and venue of any IP Protection Action or, if you timely provide Malwarebytes with an Arbitration Opt-out Notice, of any other Dispute will be the state and federal courts located in the Northern District of California, and each of the parties hereto waives any objection to jurisdiction and venue in such courts. Unless you timely provide Malwarebytes with an Arbitration Opt-out Notice, you acknowledge and agree that you and Malwarebytes are each waiving the right to a trial by jury or to participate as a plaintiff or class member in any purported class action or representative proceeding. 
Further, unless both you and Malwarebytes otherwise agree in writing, the arbitrator may not consolidate more than one person's claims, and may not otherwise preside over any form of any class or representative proceeding. If this specific paragraph is held unenforceable, then the entirety of this Section will be deemed void. Except as provided in the preceding sentence, this Section will survive any termination of this Agreement. The arbitration will be administered by the American Arbitration Association ("AAA") in accordance with the Commercial Arbitration Rules and the Supplementary Procedures for Consumer Related Disputes (the "AAA Rules") then in effect, except as modified by this Section. (The AAA Rules are available at www.adr.org/Rules or by calling the AAA at 1-800-778-7879.) The Federal Arbitration Act will govern the interpretation and enforcement of this Section. A party who desires to initiate arbitration must provide the other party with a written Demand for Arbitration as specified in the AAA Rules. (The AAA provides a general Demand for Arbitration and a separate Demand for Arbitration for California residents). The arbitrator will be either a retired judge or an attorney licensed to practice law and will be selected by the parties from the AAA's roster of arbitrators. If the parties are unable to agree upon an arbitrator within seven days of delivery of the Demand for Arbitration, then the AAA will appoint the arbitrator in accordance with the AAA Rules. Unless you and Malwarebytes otherwise agree, the arbitration will be conducted in the county where you reside. If your claim does not exceed $10,000, then the arbitration will be conducted solely on the basis of the documents that you and Malwarebytes submit to the arbitrator, unless you request a hearing or the arbitrator determines that a hearing is necessary. If your claim exceeds $10,000, your right to a hearing will be determined by the AAA Rules. 
Subject to the AAA Rules, the arbitrator will have the discretion to direct a reasonable exchange of information by the parties, consistent with the expedited nature of the arbitration. The arbitrator will render an award within the time frame specified in the AAA Rules. The arbitrator's decision will include the essential findings and conclusions upon which the arbitrator based the award. Judgment on the arbitration award may be entered in any court having jurisdiction thereof. The arbitrator's award of damages must be consistent with the terms of Section 9 ("Limitation of Liability") as to the types and amounts of damages for which a party may be held liable. The arbitrator may award declaratory or injunctive relief only in favor of the claimant and only to the extent necessary to provide relief warranted by the claimant's individual claim. If you prevail in arbitration you will be entitled to an award of attorneys' fees and expenses, to the extent provided under applicable law. Malwarebytes will not seek, and hereby waives all rights it may have under applicable law to recover, attorneys' fees and expenses if it prevails in arbitration. Your responsibility to pay any AAA filing, administrative and arbitrator fees will be solely as set forth in the AAA Rules. However, if your claim for damages does not exceed $75,000, Malwarebytes will pay all such fees unless the arbitrator finds that either the substance of your claim or the relief sought in your Demand for Arbitration was frivolous or was brought for an improper purpose (as measured by the standards set forth in Federal Rule of Civil Procedure 11(b)). If you provide any ideas, suggestions, or recommendations regarding the Software or the Database ("Feedback"), Malwarebytes will be free to use, disclose, reproduce, license or otherwise distribute, and exploit such Feedback as it sees fit, entirely without obligation or restriction of any kind. 
By providing Feedback, you grant Malwarebytes a worldwide, perpetual, irrevocable, sublicensable, fully-paid and royalty-free license to use and exploit in any manner such Feedback. If you are using Malwarebytes Software in a business or for business purposes, you grant Malwarebytes the right to use your trade name (and the corresponding trademark or logo) on the Malwarebytes website and marketing materials to identify you as a customer. This Agreement will be governed by and construed in accordance with the laws of the State of California, without regard to or application of conflict of laws rules or principles. The United Nations Convention on Contracts for the International Sale of Goods will not apply. If you are a U.S. resident, Section 12 ("Agreement to Arbitrate – U.S. Customers") applies. If you are not a U.S. resident, you agree that any claims or actions regarding this Agreement may be brought solely in the state or federal courts located in the Northern District of California, and you waive any right to challenge jurisdiction and venue therein. You may not assign or transfer this Agreement or any rights granted hereunder, by operation of law or otherwise, without Malwarebytes' prior written consent, and any attempt by you to do so, without such consent, will be void. Except as expressly set forth in this Agreement, the exercise by either party of any of its remedies under this Agreement will be without prejudice to its other remedies under this Agreement or otherwise. All notices or approvals required or permitted under this Agreement will be in writing and delivered by email (we will email you at the email address you provided us when you initially purchased your license), and in each instance will be deemed given upon receipt. The failure by either party to enforce any provision of this Agreement will not constitute a waiver of future enforcement of that or any other provision. 
Any waiver, modification or amendment of any provision of this Agreement will be effective only if in writing and signed by authorized representatives of both parties. Nothing in this Agreement shall be construed to create a partnership, joint venture or agency relationship between the parties. Neither party will have the power to bind the other or to incur obligations on the other's behalf without such other party's prior written consent. If any provision of this Agreement is held to be unenforceable or invalid, that provision will be enforced to the maximum extent possible, and the other provisions will remain in full force and effect. This Agreement is the complete and exclusive understanding and agreement between the parties regarding its subject matter, and supersedes all proposals, understandings or communications between the parties, oral or written, regarding its subject matter, unless you and Malwarebytes have executed a separate agreement. Any terms or conditions contained in your purchase order or other purchasing document that are inconsistent with or in addition to the terms and conditions of this Agreement are hereby rejected by Malwarebytes and will be deemed null. If you have any questions regarding this Agreement, you may contact Malwarebytes at support@malwarebytes.com. If you wish to send us a legal notice, please start the subject line of your email with “Attention: Legal Department”.
2019-04-23T08:54:43Z
https://www.malwarebytes.com/eula/
6.1 How long will this take? Vanimedia (a petal of Vanipedia) has created 1,080 video clips (1 to 15 minutes long) of Śrīla Prabhupāda's audio messages and is now subtitling them in multiple languages. This is a collaborative project developed by devotees of Śrīla Prabhupāda who are coming together from all around the globe to create this unparalleled repository of his lectures and conversations subtitled in their own languages. To date, 636 devotees have participated, creating 35,843+ subtitles in 93 languages. As more of these videos are subtitled, we enable people speaking various languages to see and hear from Śrīla Prabhupāda. We aspire to reach 108+ languages and 40,000 subtitles by February 2020, which commemorates the 54th anniversary of Śrīla Prabhupāda's first-ever recorded speech, the Introduction to Bhagavad-gītā As It Is, which he recorded in New York on the 19th and 20th of February 1966. Read on and see how you can be engaged in this glorious project. For the first time in history, a powerful acarya in the authorised disciplic succession has left behind live sound recordings of his bhajans, conversations, talks and lectures. From the lotus lips of His Divine Grace A.C. Bhaktivedanta Swami Śrīla Prabhupāda, we still have the opportunity to hear directly the illuminating explanations and deep realizations of the pure devotee as he preaches and practices the science of Krsna consciousness. "It is explained in the previous verse that one has to hear glorification of the Lord from the mouth of a pure devotee. This is further explained here. The transcendental vibration from the mouth of a pure devotee is so powerful that it can revive the living entity's memory of his eternal relationship with the Supreme Personality of Godhead. In our material existence, under the influence of illusory maya, we have almost forgotten our eternal relationship with the Lord, exactly like a man sleeping very deeply who forgets his duties. 
In the Vedas it is said that every one of us is sleeping under the influence of maya. We must get up from this slumber and engage in the right service, for thus we can properly utilize the facility of this human form of life. As expressed in a song by Thakura Bhaktivinoda, Lord Caitanya says, jiva jaga, jiva jaga. The Lord asks every sleeping living entity to get up and engage in devotional service so that his mission in this human form of life may be fulfilled. This awakening voice comes through the mouth of a pure devotee. For a conditioned soul, therefore, it is very important to hear from the mouth of a pure devotee, who is fully surrendered to the lotus feet of the Lord without any material desire, speculative knowledge or contamination of the modes of material nature. Every one of us is kuyogi because we have engaged in the service of this material world, forgetting our eternal relationship with the Lord as His eternal loving servants. It is our duty to rise from the kuyoga platform to become suyogis, perfect mystics. The process of hearing from a pure devotee is recommended in all Vedic scriptures, especially by Lord Caitanya Mahaprabhu. One may stay in his position of life -- it does not matter what it is -- but if one hears from the mouth of a pure devotee, he gradually comes to the understanding of his relationship with the Lord and thus engages in His loving service, and his life becomes completely perfect. Therefore, this process of hearing from the mouth of a pure devotee is very important for making progress in the line of spiritual understanding." Having Śrīla Prabhupāda's audio available, with multi-language subtitles, makes it possible for all of his family members from around the globe to humbly and respectfully hear from him. In this way, no one is barred from making their lives completely perfect. This project will surely prove to be one of the most uniting forces of Śrīla Prabhupāda's multi-lingual family situated in all corners of the planet. 
It will also help to increase the family members of Śrīla Prabhupāda by making it easy to introduce Śrīla Prabhupāda's messages to the people of all nations. Here is a sheet of languages that we plan to have subtitled. The total number of people speaking these languages is 6,865,000,000, although bilingual speakers may be counted more than once. This list comprises the most-spoken languages of the world, and also the smaller European languages. If you would like to translate a language not listed here, please contact us and we will add it. Dotsub, the online translating program that we use for subtitling, supports 485 languages. Indeed there is no lack of languages to translate, and Śrīla Prabhupāda instructed his GBC that printing and translation must continue, stating "this is my request." So here is a practical and effective way to satisfy Śrīla Prabhupāda's request to continue translating: subtitling his words in multiple languages. This is a collaborative service being offered by followers of His Divine Grace A.C. Bhaktivedanta Swami Śrīla Prabhupāda, the Founder-Acarya of the International Society for Krsna Consciousness. Via collaboration, difficult tasks become very easy. By building teams of translators it is possible to achieve these goals for Śrīla Prabhupāda's pleasure and to benefit the people of this world, irrespective of which language they speak. All videos are accessed via Vanimedia.org and also via YouTube.

Translating all 1,080 video clips into a local language:
- gives people unparalleled access to Śrīla Prabhupāda's spoken words,
- facilitates the growth of the translating teams, and
- trains new translators in the process.

Our process:
1. We chose quotes from an existing pool of 75,000+ Vaniquotes pages.
2. We then created the audio files and placed them into the Clips to subtitle table.
3. We created the subtitles in English.
4. We uploaded the video with English subtitles into Dotsub for translating and into YouTube for viewing.
5. In Dotsub, many translators can simultaneously translate into various languages.
6. When a language is completed in a Dotsub video, it can then be synchronized with the YouTube video. We do this every Ekadasi.
7. We then create a Vanipedia page from the translation, inserting both the audio link and the YouTube link within the page.

You are a good fit for this service if:
- You are familiar with translation from English into your mother language!
- You are not so familiar, but you are confident that you could make a translation!
- You want to help your fellow countrymen to hear Śrīla Prabhupāda!
- You love to, or would love to, associate with Śrīla Prabhupāda's audio!
- You want to render some personal service to Śrīla Prabhupāda!
- You want to see Śrīla Prabhupāda's teachings distributed to every corner of the globe!
- You have an hour or two of spare time!
- You appreciate the power of collaborative efforts, facilitating lots of people doing a little work to get super-excellent results!

Here is a great opportunity waiting for you, no matter how much time you are able to offer. We are recruiting translators who want to be part of this historical initiative. Contact us and we will add you to our list of translators. What awaits you:
- direct association with Śrīla Prabhupāda as you translate his words!
- a personal offering, made by you, for the pleasure of Śrīla Prabhupāda!
- the joy of serving, with your multi-language family members, in a dynamic project!
- a team of enthused translators who stimulate you to offer your service with joy!
- all assistance to perform your service nicely!
- an opportunity for people of your language to directly hear Śrīla Prabhupāda!
- a never-before-existing multi-language repository of Śrīla Prabhupāda's recorded words!
- a possibility for over 75% of the people of this planet to directly hear Śrīla Prabhupāda!
- the opportunity for Śrīla Prabhupāda to personally (via his sound vibration) enter into millions of people's homes and hearts!

The language teams can evolve into Embassies for each language. 
When an enthused devotee comes forward to accomplish this service, he or she can become the Ambassador of that language, and inspire many people to participate. If you would like to inspire and coordinate people, then please contact us for a Skype interview. On 20 February 2018 devotees will be celebrating the 52nd anniversary of Śrīla Prabhupāda speaking his Introduction to Bhagavad-gītā As It Is. Vanipedia is organizing the ideal gift to offer Śrīla Prabhupāda on this special day and requests your participation. We want to translate Śrīla Prabhupāda's Introduction to Bhagavad-gītā As It Is into 108+ languages. Śrīla Prabhupāda is a great strategist determined to spread Krishna consciousness all over the world. His strategy is clearly centered on the translation and distribution of his books. He had full faith that his teachings could bring about a spiritual revolution in the hearts of people from all lands, and thus he instructed his devotees to translate his books into all languages. Our lives brighten up and take on real meaning when we assist him in this work. We hanker to experience Śrīla Prabhupāda's oceanic smile expressing his gratitude to us for continuing his mission. Śrīla Prabhupāda left this world before his work was finished, and still today, after four decades, our master's work is not completed. While dedicated devotees have translated Śrīla Prabhupāda's Bhagavad-gītā As It Is into 59 languages, compared to the Bible's 636 languages, we see that there is still a lot of scope for our expansion. To stimulate an increase in the number of languages, Vanipedia has initiated a groundbreaking project to translate the Introduction to Bhagavad-gītā As It Is into 108+ languages by February 2018, coinciding with the 52nd anniversary of Śrīla Prabhupāda's first recording of it. 
During the past five years we developed a streamlined process of translating and have engaged over 586 devotees from 91 different languages to subtitle 1,080 short YouTube video clips of Śrīla Prabhupāda's lectures over 32,750 times. Recently we created 24 video clips of the 3-hour-and-12-minute recording of the Introduction to Bhagavad-gītā As It Is, each 8 minutes long. We already have 60 languages completed and 12 in process, with others warming up to start. It takes only 24 hours to complete one language. We want to offer to the worldwide followers of Śrīla Prabhupāda at least 1,080 of these 1-to-10-minute video clips of Śrīla Prabhupāda's spoken words, all subtitled in at least 32 and up to 108 languages. Please support this effort generously by joining one of the language teams, or finding translators. As of September 2016, 13 languages have completed the 1,080 videos. One clip takes about an hour to translate. 108 clips will take 108 hours and 1,080 clips will take about 1,080 hours. If we manage to do them in 50 languages, then these are the time frames we are looking at. 108 clips in 50 languages will take a collective amount of 5,400 hours of translating. 1,080 clips in 50 languages will take a collective amount of 54,000 hours of translating. Wow, that sounds like a lot. But if we build up teams it is not much for an individual to do. Let's say we get an average of 5 translators for each language. Then the individual "service time" to offer would only be 22 hours per person to complete the first 108 clips, and only 216 hours to complete the 1,080 clips. If we have an average of 10 translators for each language, then the individual "service time" to offer would only be 11 hours per person to complete the first 108 clips, and only 108 hours to complete the 1,080 clips. Now I am sure you see where we are going with this. That's right! The more devotees blissfully involved, the less effort each individual has to make. 
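The workload figures above are simple division under the text's one-hour-per-clip estimate. As a quick sketch (the helper name per_person_hours is ours, not the project's):

```python
import math

HOURS_PER_CLIP = 1  # the text's estimate: one clip takes about an hour

def per_person_hours(clips, translators):
    """Hours each team member contributes to finish `clips` in one language."""
    return math.ceil(clips * HOURS_PER_CLIP / translators)

# Collective effort across 50 languages:
print(108 * 50 * HOURS_PER_CLIP)    # 5400 hours for 108 clips
print(1080 * 50 * HOURS_PER_CLIP)   # 54000 hours for 1,080 clips

# Individual share within one language team:
for translators in (5, 10):
    print(translators, per_person_hours(108, translators),
          per_person_hours(1080, translators))
# 5 translators  -> 22 h for 108 clips, 216 h for 1,080 clips
# 10 translators -> 11 h for 108 clips, 108 h for 1,080 clips
```

The pattern is clear: doubling the team halves each person's share, which is the whole argument for building larger translation teams.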
Let's look at this scenario. An average of 20 translators per language works out to less than 6 hours of service to complete 108 clips, and 54 hours to complete the 1,080 clips. With 20 devotees each offering 6 hours of service a week, a single language can be completed, all 1,080 clips, in just 9 weeks. Through collaboration, wonderful things are possible. Let's enthusiastically get this glorious project up and running.

Our goal is to complete the full 1,080 clips (in at least 20 languages) by Śrīla Prabhupāda's Vyasa Puja festival of 2017. As of Vyasa Puja 2016 we have 13 languages completed. If you want to help, then please follow the directions here, and with only 1 hour or less of service you will have translated a video into a new language. If you know of devotees who can translate the missing languages, then please share this link with them. Any questions can be asked by sending us an email.

To facilitate the large volume of work needed to achieve our goals, we have opted to use an online website (Dotsub.com) to subtitle the clips in each language. This dedicated platform for multi-language translations supports 495 different languages waiting for Śrīla Prabhupāda's words to be translated into them.

Principle of translating: As this is conversational English, it does not have to be translated completely literally. Rather, it is important to translate in a way that your readers will understand the messages that Śrīla Prabhupāda delivers. If your language does not have a specific way of writing Sanskrit, then copy the Sanskrit words directly from the English subtitles so that you retain all of the diacritic marks in your subtitles. When translations are confirmed to be correct, the editing function can be turned off. The transcription text is made up of all of the subtitle lines, so by selecting any part of the text you can jump to different places in the video.
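The workload arithmetic in the scenarios above can be sketched in a few lines of code. This is a rough model, assuming the figure quoted in the text of about one hour of translation per clip; the function name is ours, introduced only for illustration.

```python
# Rough workload model for the subtitling project,
# assuming ~1 hour of translation per clip (as quoted in the text).
HOURS_PER_CLIP = 1

def hours_per_translator(clips: int, translators: int) -> int:
    """Individual share of the workload, rounded up to whole hours."""
    total_hours = clips * HOURS_PER_CLIP
    return -(-total_hours // translators)  # ceiling division

# The scenarios from the text:
assert hours_per_translator(108, 5) == 22     # ~22 hours each
assert hours_per_translator(1080, 5) == 216
assert hours_per_translator(108, 10) == 11    # 10.8 rounded up
assert hours_per_translator(1080, 10) == 108
assert hours_per_translator(108, 20) == 6     # "less than 6 hours" (5.4)
assert hours_per_translator(1080, 20) == 54
```

With 20 translators each giving 6 hours a week, the 54-hour individual share for 1,080 clips indeed finishes in 9 weeks, matching the scenario above.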
Subtitle options can be turned on or off by clicking the Subtitles/CC icon (a rectangle with "CC" in the middle) at the bottom right of the video player (fourth icon from the right). Languages can be selected by clicking the Settings icon (a small spiked wheel) at the bottom right of the video player (third icon from the right). For any assistance, please contact us.

We launched this project at the beginning of March 2013, and by Gaura Purnima, the 27th of March, we had our first video clip ready with 34 subtitles. Click here to play our first, history-in-the-making clip (on the video, click the "captions" button to select your language). We now have 1,080 video clips uploaded, and so far 605 translators, assisted by 29 others, have translated over 35,780 subtitles in 93 languages. Here you can see our results: 23 languages have completed the 1,080 videos. Keep coming back to watch these results grow, or better still, become part of it and help it grow.

We encourage dedication and commitment from our volunteers with the clear understanding that by performing vaniseva to Śrīla Prabhupāda's teachings in a collaborative environment, we offer Śrīla Prabhupāda gifts that he always showed a deep appreciation in receiving. We thank all the people who are building Vanipedia and offer these recognitions.

As of the 10th of June 2013 we have published our first 108 videos (10% done) in 35 days - that averages out at 3.09 a day!
As of the 5th of July 2013 we have published our second 108 videos (20% done) in 25 days - that averages out at 4.32 a day!
As of the 19th of July 2013 we have published 270 videos (25% done) in a total of only 74 days - that averages out at 3.65 a day!
August 29th - Srila Prabhupada's Vyasa Puja - Our multi-language subtitle project reaches 396 videos, subtitled a total of 2,009 times in 42 languages. 86 translators and 5 assistants have participated so far.
As of the 26th of September 2013 - the 48th anniversary of Srila Prabhupada's arrival in the West - we have published our fifth set of 108 videos (50% done) in 22 days - that averages out at 4.91 a day! (2,671 subtitles in 43 languages kindly translated by 93 devotees and assisted by 6 others.) The first 50% (540 videos) was created in only 143 days - that averages out at 3.78 a day!

October 6th 2013 - 36th Disappearance Anniversary of Srila Prabhupada - Our multi-language subtitle project has 546 videos, subtitled a total of 3,090 times in 46 languages. 105 translators and 6 assistants have participated so far.

November 30th 2013 - on the eve of our 6th annual Srila Prabhupada vanimarathon - Our multi-language subtitle project has 546 videos, subtitled a total of 3,425 times in 49 languages. 121 translators and 6 assistants have participated so far. The VanimediaMayapur YouTube channel has been viewed 16,188 times.

December 31st 2013 - Our multi-language subtitle project has 546 videos, subtitled a total of 4,250 times in 51 languages. 129 translators and 6 assistants have participated so far. The VanimediaMayapur YouTube channel has been viewed 18,054 times. So far, one video has subtitles in 49 languages, 8 videos have subtitles in 30 or more languages, 29 videos in 20 or more, 92 videos in 15 or more, 154 videos in 10 or more, and 285 videos in 5 or more.

February 3rd, 2014 - Our multi-language subtitle project reaches 556 videos, subtitled a total of 5,008 times in 54 languages. 141 translators and 6 assistants have participated so far. The VanimediaMayapur YouTube channel has been viewed 20,085 times.
So far, one video has subtitles in 53 languages, 13 videos have subtitles in 30 or more languages, 39 videos in 20 or more, 111 videos in 15 or more, 164 videos in 10 or more, and 471 videos in 5 or more.

March 16th, 2014 - Our multi-language subtitle project reaches 640 videos, subtitled a total of 5,557 times in 55 languages. 152 translators and 6 assistants have participated so far. The VanimediaMayapur YouTube channel has been viewed 25,250 times. So far, one video has subtitles in 55 languages, 4 videos have subtitles in 35 or more languages, 13 videos in 30 or more, 26 videos in 25 or more, 53 videos in 20 or more, 132 videos in 15 or more, 185 videos in 10 or more, and 544 videos in 5 or more.

May 13th, 2014 - Nrsimha Caturdasi - Our multi-language subtitle project reaches 640 videos, subtitled a total of 6,665 times in 58 languages. 180 translators and 6 assistants have participated so far. The VanimediaMayapur YouTube channel has been viewed 30,525 times. So far, one video has subtitles in 58 languages, 4 videos have subtitles in 40 or more languages, 11 videos in 35 or more, 18 videos in 30 or more, 31 videos in 25 or more, 73 videos in 20 or more, 152 videos in 15 or more, 200 videos in 10 or more, and 598 videos in 5 or more.

July 18th, 2014 - Our multi-language subtitle project reaches 650 videos, subtitled a total of 7,559 times in 64 languages. 198 translators and 6 assistants have participated so far. The VanimediaMayapur YouTube channel has been viewed 42,104 times.
So far, one video has subtitles in 64 languages, 2 videos have subtitles in 50+ languages, 5 videos in 45+, 9 videos in 40+, 14 videos in 35+, 27 videos in 30+, 36 videos in 25+, 87 videos in 20+, 161 videos in 15+, 255 videos in 10+, and 641 videos in 5+.

August 18th, 2014 - Srila Prabhupada's Vyasa Puja - Our multi-language subtitle project reaches 660 videos, subtitled a total of 8,020 times in 65 languages. 206 translators and 6 assistants have participated so far. The VanimediaMayapur YouTube channel has been viewed 45,950 times. So far, one video has subtitles in 65 languages, 4 videos have subtitles in 50+ languages, 8 videos in 45+, 12 videos in 40+, 17 videos in 35+, 31 videos in 30+, 39 videos in 25+, 93 videos in 20+, 163 videos in 15+, 273 videos in 10+, and 650 videos in 5+.

November 30th, 2014 - On the eve of our 7th annual Srila Prabhupada vanimarathon - Our multi-language subtitle project has 849 videos, subtitled a total of 10,713 times in 71 languages. 242 translators and 6 assistants have participated so far. The VanimediaMayapur YouTube channel has been viewed 66,051 times.

December 31st, 2014 - At the end of our 7th annual Srila Prabhupada vanimarathon - Vanipedia reached 14,572 categories, 129,406 pages, 2,023,892 quotes, 797,070 edits and a total of 34,977,297 page views. Our multi-language subtitle project has 1,000 videos, subtitled a total of 13,395 times in 75 languages. 264 translators and 6 assistants have participated so far.
The VanimediaMayapur YouTube channel has been viewed 74,515 times. The total result for the Vanifun marathon of December 2014 was 101 participants subtitling 2,688 videos in 51 languages.

February 1st, 2015 - Lord Nityananda's Appearance Day - We completed the production of the 1,080 videos with English subtitles and launched the Introduction to Bhagavad-gita 108+ languages project.

May 2nd, 2015 - Nrsimha Caturdasi - Our multi-language subtitle project has subtitled the 1,080 video clips a total of 15,488 times in 77 languages. Approximately 25,000 hours of collaborative devotional service have been performed by 301 translators and 9 assistants. The VanimediaMayapur YouTube channel has been viewed 114,500 times. We launched our multi-language presence directly in Vanipedia, starting with 7,009 pages in 77 languages.

September 17th - 50th Anniversary of Srila Prabhupada Arriving in the USA - Our multi-language subtitle project with 1,080 videos has been subtitled a total of 18,280 times in 85 languages. 351 translators and 11 assistants have participated so far. The VanimediaMayapur YouTube channel has been viewed 183,755 times. We are now aiming to complete the 108 languages by the 51st anniversary.

March 3rd - The VanimediaMayapur YouTube channel has been viewed 300,507 times. It took 68 days to get from 250,000 to 300,000 views. We also reached 21,670 subtitles.

April 28th - The VanimediaMayapur YouTube channel has been viewed 350,472 times. It took 56 days to get from 300,000 to 350,000 views. We also reached 22,325 subtitles.

May 10th 2016 - Our multi-language subtitle project with 1,080 videos has been subtitled a total of 22,800 times in 87 languages. 457 translators and 16 assistants have participated so far. The VanimediaMayapur YouTube channel has been viewed 361,250 times.

June 18th - The VanimediaMayapur YouTube channel has been viewed 400,248 times. It took 51 days to get from 350,000 to 400,000 views. We also reached 23,580 subtitles.
July 13 - ISKCON's 50th birthday - Our multi-language subtitle project has reached 1,080 videos, subtitled a total of 24,040 times in 87 languages. 484 translators and 14 assistants have participated so far. The VanimediaMayapur YouTube channel has been viewed 425,589 times.

November 4 - Srila Prabhupada's Disappearance Day - Our multi-language subtitle project has reached 1,080 videos, subtitled a total of 25,603 times in 87 languages. 496 translators and 19 assistants have participated so far. The VanimediaMayapur YouTube channel has been viewed 543,204 times.

October 5 - First day of Kartika - Our multi-language subtitle project has reached 1,080 videos, subtitled a total of 30,038 times in 90 languages. 552 translators and 20 assistants have participated so far. The VanimediaMayapur YouTube channel has been viewed 1,236,909 times.

November 30th, 2017 - On the eve of our 10th annual Srila Prabhupada vanimarathon - Our multi-language subtitle project has 1,080 videos, subtitled a total of 31,213 times in 90 languages. 561 translators and 20 assistants have participated so far. The VanimediaMayapur YouTube channel has been viewed 1,416,822 times.

Desiring to unite and expand Śrīla Prabhupāda's multi-lingual family, we pray for the blessings of the Vaisnavas. We are gearing up to make hundreds of small audio clips with multi-language subtitles, thus enabling people from every corner of the world, speaking as many languages as possible, to directly hear and understand Śrīla Prabhupāda's sublime message of truth and love.

Tero, Finnish - Even though it is nectarean, I sometimes feel a little sad when I translate Prabhupada's words, because I get this longing for Prabhupada - because I could not have met this wonderful person, offered my obeisances, witnessed his actions when he was on this planet, or sat in the crowd of devotees. When I read about him, I want to be there with the first devotees.
Missing Srila Prabhupada's presence, and all these exalted personalities I read of, sometimes makes me a little sad. Or perhaps it's actually that I'm not so very Krsna conscious that I feel that way.

Patrick, German - If you have read the Lilamrita, or any other Prabhupada biography for that matter, you might have been thinking to yourself, "If only I could have been there too." But, as we know, Srila Prabhupada's vani is non-different from him, so we are still blessed with Srila Prabhupada's presence in a profound way. Nonetheless, if reading is still not enough for you, some of the inventions of modern times come in handy here by providing us with the actual spoken words of Srila Prabhupada in sound. Listening to the, indeed quite numerous, audios of Srila Prabhupada is a wonderful way to connect to him, and they are forever instructive. Not only can we hear him speaking in his own voice, but we can hear him directly address his disciples, laugh, joke, hammer home a point, express his humble devotion, or simply ridicule the absurdity of material life.

Of course, all these audios are in English and need to be made available in as many languages as possible. Even with mundane topics, one can always check whether one has understood a text by translating it into another language. The same goes here for the instructions of the pure devotee. Not only does the service of translating entail meditating on Prabhupada's words, but in the process the translator is also rendering what Srila Prabhupada stressed so many times as the most important service - making his instructions available to others. Therefore the service of translating is simple ecstasy for anyone willing to work with language - with thought and heart - entering into the early pastimes of ISKCON by joining Srila Prabhupada's young disciples all over the world, waiting with anticipation as he delivers his personal instructions in classes, interviews and conversations.
We can experience what made our now-seniors ditch everything as they heard Prabhupada's call and ran forward to be part of this glorious movement.

Marina, Portuguese (Brazil) - I'm so grateful that, by Srila Prabhupada's and Lord Krishna's mercy, I had the chance to get in contact with Visnu Murti prabhu, who is very enthusiastic and supportive in engaging devotees in vaniseva, and that I was able to become a translator in the multi-language family project at Vanipedia. It's very inspiring to participate, to see other Vaishnavas worldwide being part of this project, and to spread Srila Prabhupada's message of Krishna consciousness. May Srila Prabhupada and Lord Krishna bless us so that we can keep doing our vaniseva and in this way please them for many years. Jay Srila Prabhupada! Hare Krishna!

Ananda Bhakti Devi Dasi, Spanish - Vaniseva came into my life just at the moment that I asked Krsna for understanding and help. Now there are so many wonderful Vaishnavas who are supporting me by being enthusiastic, sincere, real and helpful. Listening to Srila Prabhupada gives us a real connection to his vani, and in this way we are learning to act. All our doubts disappear in his association. I would like to spend all day listening to Srila Prabhupada and translating his words so that others can read his message and feel connected with the seva as well. Thanks to all the Vaisnavas involved. I feel blessed and protected in the association of Srila Prabhupada. Jaya Vaniseva, Jaya Srila Prabhupada.

Padmavati Visaka Devi Dasi, Spanish - I just want to cooperate in the best way to build Srila Prabhupada's Vani-temple soon, so that his teachings can continue touching hearts and changing lives. I feel honored to be able to lay a small brick in building his Vani-temple. This is my offering of love and sincere gratitude to him. Thank you for giving me this privilege.
Vesna, Slovenian - By sorting out this text to write, I started to realize how my own life is starting to be sorted out. The word "closer" is the best one for describing these feelings. I have started to have inner motivation. So many simple truths are written within Prabhupada's texts. Another good thing is being able to learn the meanings of some Sanskrit words. Hrsikesa is just one of them - Krsna is the master of my senses. I'll keep translating to continue this amazing learning experience. Please join us in service.

Bahulasva Dasa, Swedish - From just a small video clip of only 5-6 minutes, Srila Prabhupada can convince anybody to take up Krishna consciousness from any kind of angle. He is simply so convincing, not only by his words, but by his relaxed manner and gestures. He has such a genuine appearance, so trustworthy. Thank you again and again for giving me his association through this service.

Tatjana, Macedonian - When my best well-wisher Akrura Prabhu encouraged me to translate videos for Vanipedia, I was very, very scared about how I would solve that puzzle and handle this special spiritual translation. It seemed to be very difficult, but it wasn't. It was real transcendental pleasure with so much joy, and it keeps on getting even more interesting. When I shared my new translation experience with my Guru Maharaj, he was so happy and satisfied. Wow, that is Prabhupada's purest mercy! Vanipedia is one of the best and smartest projects I've ever seen on the Internet, with great leadership from Visnu Murti Prabhu and so many dedicated devotees competing with each other to translate as much as they can. Splendid! Thank you so much for the mercy to translate and do something for Prabhupada's mission!

Ratnavali Devi Dasi, Portuguese (Brazil) - In the last 7 days I committed myself to translating 8 videos a day to close an important cycle, both for me and for the Vanipedia project. On Sunday (yesterday) I managed to complete this goal.
Many other translations will come, but this was a very important milestone for me. In these seven days I could advance as never before, in understanding and in devotion to the Supreme Personality of Godhead, Krishna. I attribute this progress to the still-living presence of Srila Prabhupada through these recordings. Just as the maha-mantra, while chanted, brings Krishna to our tongues, Prabhupada's recorded voice nowadays brings the presence of Prabhupada to our reality.

While Prabhupada's books are meant for our intelligence, his sound vibration - the sound of Prabhupada's voice - is meant for our hearts. Prabhupada believed that his books were his greatest contribution to the Krishna consciousness movement, and the sound of his voice is also such a great mercy. Even if you don't understand English, the words will reach your heart, though they will not reach your intelligence. In the spiritual method that Prabhupada has taught us, it is important to cultivate both the intelligence and the heart. Thus his books and his spoken words are a perfect combination, and translating his words gives people in all languages the impetus to hear!

Vanipedia has brought to people like me, and you, the possibility to listen and understand, as if Prabhupada were still alive - the possibility to realize how powerful his words are. So far, the Vanipedia project has launched 82 languages, with a total of 17,224 translations. That's amazing! Besides that, the Vanipedia project is allowing people who live away from the temple or have family commitments, like myself, to practice bhakti-yoga inside our homes, receiving so many wonderful instructions from His Divine Grace Srila Prabhupada. Now nobody can give excuses like "I don't have time," "I live far away from the temple," or "I don't have enough money to buy the books." Almost everything we need is inside Vanipedia.
So, do your duty of vaniseva, chant Hare Krishna Hare Krishna Krishna Krishna Hare Hare / Hare Rama Hare Rama Rama Rama Hare Hare, and be happy!

Ronald, Dutch - I'm feeling blessed that this mission came to me. At first I was disappointed by my Radhadesh visit in June; I was seeking contact with devotees, but they were all so busy... But in the morning lecture by Tattvavit das there came a question from a devotee, and the answer was directly applicable to my situation. He even looked me directly in the eyes. And later that morning I met you and your wife. It was a blessed day!

Bhakta Lawesh - Tansen, Nepal Vanipedia team - HG Vishnu Murti Prabhuji has asked me to share some of the things that we are undergoing to do the translation work for uniting Srila Prabhupada's multi-lingual family. With translations into five languages undertaken here in Tansen, Nepal, we have lots of challenges in turning Tansen into an ecstatic Vanipedia base. We are trying to do the translation work for Maithili, Newari, Magar, Achami and Nepali. Here in Nepal as a whole, people are not very educated in computer skills or the English language. Although we are based in Tansen, we have devotees performing vaniseva from all over the country. In most places there is no Internet connection, so we have to print the materials to translate here in Tansen and send them to the next town, Butwal, from where there is a regular bus to the place where the transcriptions reach the devotee, who will gradually do the translation work and then send it back to us whenever the opportunity comes. Some devotees come forward expressing a desire to help and then just disappear, and there is no way we can contact them, as many do not use the communication facilities that are generally used. Tansen is a small place, and we have a congregation of around 50 devotees which is gradually building up, with myself the only brahmacari working actively to serve all these devotees.
So there are always engagements and programs of some kind each day, and it is not possible for us to give concentrated attention to the translations. Our challenges while doing the translation work are many, and I feel much pain that I am not able to participate to the degree this project deserves. We have two devotees inside Tansen doing the translation work, and what they do is translate with the help of those who are better acquainted with the language. They write the whole thing down on paper and, when finished, give me the paper, and then I give that to our typist devotee, who is also facing challenges. He has his college studies, and with long hours of study he likes to take nice rest also; I just cannot push him to do the work quickly. Whenever he is comfortable and feels like typing, he will inform me of his availability, and then I pass my laptop to him and he gradually types the whole thing in Nepali text into MS Word. Then, after making additional corrections, I convert it into Unicode through a website which we found after much endeavor. Another challenge we face is that whenever the weather gets worse - even a light rain - the electricity is gone, and with it the Internet also, leaving us completely stranded in our service of translation. Only after some quality struggles are our services completed.

Still, by the merciful blessings of Srila Prabhupada, HH Mahavishnu Maharaja, HG Vishnu Murti Prabhuji and all the Vaishnava devotees, we are somehow or other able to participate in this great mission of compassion, and all the translations are gradually unfolding. Although Mahavishnu Maharaja once mentioned to me that Tansen, Nepal has become an ecstatic Vanipedia base with 5 languages in hand, to fulfill his words in their truest sense we will have to be more enthusiastic and determined. We sincerely aspire to do so, as there is no greater joy than to share Srila Prabhupada's words with our fellow countrymen.
I hope that whatever we have tried to share here will enthuse devotees to take this great mission more seriously, especially those who have all the facilities in hand to do so. Do not be discouraged if there are some small obstacles; Krsna will surely help you, as he is helping us.

Madhaviya Devi Dasi, Portuguese (Brazil) - Something amazing happened yesterday as I was translating 0196. This class contains the essence of what Srila Prabhupada spoke to me in that dream. It happened exactly on the same day that I told you about my dream. Thank you so much for giving me the opportunity to regain that message and to clarify it more and more inside my heart and mind. Vani Seva Ki Jay!

Prabhupada Vani Devi Dasi, Kannada - Regarding my experience, it is wonderful, because translating requires lots of thinking, searching for proper words, making sentences in different ways and seeing which one is more appropriate. As a result I must dive deeply into Śrīla Prabhupāda's words, which makes me strikingly understand just how important it is for me to be Krishna conscious in this human life. As Śrīla Prabhupāda explains in one of the Introduction to Bhagavad-gītā clips, "we have to mold all our activities of life in such a way that we can remember Krishna always." I think that through this service we can mold our life in a way that we always think of what Śrīla Prabhupāda has said, and therefore it is the highest form of devotional service.

Haripriya Dasa, Nepali - I just finished translating my first video. The experience was superb, prabhu. I cannot express in words this blissful feeling. I started a little late today, as I was tired when I got home from work. It took me a little longer, as I am writing in Nepali after a long time. Honestly speaking, I hadn't written anything in Nepali in the 7 years since coming to the USA. So I also got the opportunity to write in my own language. It helped me to refresh my memory.
Lawesh, Magar - It is our great fortune to be able to participate in this mission of translating Śrīla Prabhupāda's words into the Magar language for the first time ever in this particular age of Kali-yuga. I can see how the devotees who are involved in it are becoming spiritually rejuvenated just by their tiny input and contribution. I was longing in my heart to somehow come closer to the teachings of Prabhupāda, and with this service I feel the reciprocation of the Lord, who allows us to render some service, entitles us to the Lord's and Prabhupāda's mercy, and thus lets the message of Prabhupāda penetrate deeper and deeper into our hearts and the hearts of others. I feel that the more we try to dedicate and give our hearts to this service, the more we will be able to obtain the cherished goal of really internalizing the vani of Prabhupāda forever in our hearts. Thus I feel greatly thankful to HH Mahavishnu Swami Maharaja for so kindly engaging us in this service. Although it seems a simple task, many impediments came our way at the beginning of translating, due to our being totally new, but Visnu Murti Prabhu patiently tolerated them, always encouraging us so kindly, which is also very touching to the heart and encourages us to work harder. This translation was mainly done by Laxmi mataji ([email protected]), one of our student devotees, who translated the whole thing into Magar, and the text was typed by Sundar Prabhu, another of our student devotees, who is also staying with me in our student centre here at Tansen. These were the main people who did this work; besides them, other devotees from the congregation helped us by providing a house to come together in, giving us advice, and providing Internet facilities. Overall, the service has given us encouragement and joy to serve even more enthusiastically.

Dauji Krsna Dasa, Croatian - Death is so obvious here in Mayapur. It is good to sometimes remember our body's final destination.
Thank you kindly for engaging me in this "happy feeling of being alive" service to HDG Śrīla Prabhupāda. Such great knowledge we have; now we just have to realize it.

Paulius, Lithuanian - I just said to myself that I should sit and translate daily, no matter how much - maybe just a few words, but daily - and now I like this routine. And usually, every time I start to translate, I spend more time than I was planning, and I stay inspired by Śrīla Prabhupāda's words.

Amala Prema Devi Dasi, Turkish - Hare Krsna. Today I have completed 1041, 1042, 1043, 1044 and 1045. I haven't been a warrior today, but exactly those 5 videos were answers to a debate we had in the morning after our BG study and breakfast at the center. Tomorrow I will get one Krsna friend to hear those five, because he needs to hear them as an answer to his own ideas. You cannot believe how interestingly all five of them relate to what we had been discussing. Śrīla Prabhupāda is always such an authorized, equipoised and dedicated guide to Krsna and śāstra. He speaks the best of words, answers in the best way, and is the best at debating and the best at serving and giving.

Gauri Gopika Devi Dasi, Portuguese (Brazil) - Translating these lectures on the 1966 Introduction to the Bhagavad-gītā has been a very important experience in my spiritual life. This service is so simple and yet so incredible, and even as I write this I am still trying to realize why. Of course, it is direct association with Śrīla Prabhupāda, which is in itself enough to make it extraordinary. But there's definitely something more to it. The fact that I did it during a period of only 8 days also increases its impact; it's just like a book distribution marathon, or any concentrated effort in devotional service, and yet it has a unique taste to it. The more we translate, the closer we feel to Śrīla Prabhupāda; and the more we feel his guiding hand and hear him, the more he becomes alive through his teachings.
Just as through his books and all of his vani. But it is not just him through his Bhagavad-gītā that we are hearing; it is Śrīla Prabhupāda with his heavy accent, his strong and melodious voice - in 1966 - freshly preaching to us the very basics of what would become a great spiritual revolution, all of this in such a unique and personal way. There's a very special feeling to it, some special intimacy that we are mercifully given through this service. It is a very personal service to Śrīla Prabhupāda. With each video, I was living with him and he was talking to me, preaching to me, inviting me to take part in the Krsna consciousness movement and to bring others to it, in such an especially personal way. Maybe because it felt like I was assisting him to write, as if he were letting me take part in the very foundations of ISKCON. He was dictating, and I was simply his hand, writing it in another language, expanding it to another part of this planet so it reaches more and more souls. He wants all of us to assist him in building this society where the whole world can find shelter.

Only now do I realize that translating these videos added enormously to a very important foundation in my heart: that of my unique relationship with the Founder-Acarya of ISKCON. Any distance suggested by this title, just like the distance we may feel from not having seen him physically, is definitely vanquished by this service. We understand at once the Founder-Acarya and the loving father, the spiritual master who established an institution from which he firmly but very compassionately and, above all, very personally comes to each one of us and begs, "please perfect your life, please read this literature, please chant Hare Krsna and be happy, and please kindly help me spread this message to everyone else." These are the foundations of his society for Krsna consciousness, and now it is up to us to constantly choose to serve its purpose, to serve Śrīla Prabhupāda's desire.
I could never actually realize why those first disciples were so crazy about him, but by the mercy of some devotees, through this service I guess I am getting some feeling of it. We are given a glimpse of Śrīla Prabhupāda's heart as he lets us in to become his writing-hand in all the languages of the world; it also gives us this sense of connection to all the other devotees, and not only those doing this service. It is impossible not to become overwhelmed with gratitude for such a magnanimous, pure-hearted being who truly loves us. We just feel like giving him everything in an attempt to see him smile. And he must be smiling with so much pleasure to see this project growing. Thank you for kindly and mercifully engaging us in this service, all glories to Śrīla Prabhupāda, his devotees and Vanipedia's beautiful effort to subtitle his Introduction to Bhagavad-gītā in 108+ languages. Alakananda Devi Dasi, Serbian - First video clip, 1057, is ready :o) It is really SUPER easy to do and at the same time very ecstatic. Timur, Russian - There are situations we would call a magic opportunity. And such situations can be provided not by ordinary people, but by those who know magic, the magicians. Often devotional service is also described as an opportunity, because it simply does not belong in this material universe. So you need somebody to connect you with this energy, which is only possible by the mercy of guru and the devotees of Lord Krishna. This Vanipedia subtitle translation project was definitely the opportunity for me. From Visnu Murti Prabhu I received my first japa-mala 5 years ago. I also received full trust then from Visnu Murti, because the subtitle translating has gradually made my English better. Participating in this project gives a sense of responsibility and connection, which means two in one: the chance of doing bhajana-kriyā and yoga. But on top of all this is the confidential association with the teachings of Śrīla Prabhupāda. 
One can easily imagine the good fortune to be his personal translator and companion. This is a chance for very wonderful association with Śrīla Prabhupāda's teachings and connection with the devotees wherever you go. Just open the link, contribute and relish, get purified, get better and do more... And stay on the spiritual platform! I want to thank everyone who takes on even the smallest effort in spreading the teachings of Lord Caitanya and those who provide the magic connection! Vanipedia translating project ki, Jaya! Śrīla Prabhupāda ki, Jaya! I kindheartedly invite everyone who feels the capacity to join in! Agnieta, Lithuanian - Your words about the Vani temple of sacred words are just astonishing and inspiring :) This is exactly what this project is doing and I feel blessed being a part of it. Thank you again and again for this opportunity - to hear and read Śrīla Prabhupāda's words closer, more deeply. Yamuna Jivani Devi Dasi, Lithuanian - I have translated 0371, 0372 in Lithuanian. Feeling very happy and inspired, got them just in time, I mean for my life situation. Thank you for this service and for the light of Śrīla Prabhupāda's teachings :) Sorry for being so slow, but still continuing and feeling that I am getting much more than I'm giving. Dauji Krsna Dasa, Croatian - The best direct service of relishing and associating with dearest Śrīla Prabhupāda. Instant deliverance from the clutches of illusion. Sive, Xhosa and Zulu - Whoever thought that sitting down and typing just a few subtitles for a few minutes a day would be performing devotional service? Certainly not me. That is, until I got in touch with the Vanipedia warriors! 
I've done just a few video translations into Xhosa and Zulu, and all I can really think is that with every click of the mouse and tap of the keypad, I'm getting the most intimate association with Śrīla Prabhupāda, as well as doing what he did, what his spiritual master told him to do, what Lord Caitanya wants us all to do: preach the message of Krsna Consciousness to the world! It doesn't matter where we are, how old, or in my case how (un)intelligent - we can all perform this great service and get some blessings from Śrīla Prabhupāda. Even if he's not right there physically, Prabhupāda is forever alive through his teachings and words. Words that are a one-way ticket back home to the spiritual world if we're willing to listen to them. After all, "Vani is more important than Vapuh," right? Śrīla Prabhupāda ki Jaya! Ananda Prema Devi Dasi, Turkish - Somehow or other we have all fallen into the clutches of our daily routines, chores and our own struggles of material existence, and in the same way we have somehow or other come across a Krishna Bhakta, a book from Śrīla Prabhupāda perhaps, or even Śrīla Prabhupāda himself or his vani, one of which is this glorious Vanimedia. If you have been as fortunate as I am to read and hear from Śrīla Prabhupāda and get connected to Vanimedia, it is one of the best ways to begin your pursuit of human life. As neophyte devotees, we know how important it is to read Śrīla Prabhupāda's books constantly, but hearing or translating a Vanimedia video is the shortcut to the whole philosophy of Krishna Consciousness. It not only allows you to learn the bhakti teachings and spiritual way of life in the simplest, quickest and most practical way but also awakens the desire to learn more from Śrīla Prabhupāda and read his books on the relevant subject matters. It also keeps you connected to your practice. This is how I have personally got to know Śrīla Prabhupāda, who he is and what his message is. 
Nevertheless, Śrīla Prabhupāda's Vanimedia is for everyone, regardless of whether one is a bhakti practitioner or not, because Śrīla Prabhupāda has got the answers to the questions of every human seeker. Vanimedia clips are short enough to keep you enthused about the teachings of Śrīla Prabhupāda, and to the point and palatable enough to make you ask for more. During the hustle and bustle of my daily flow, translating for Vanimedia is my shelter to stay connected to my bhakti practice and to Śrīla Prabhupāda. I feel that my contribution is insignificant and I am not realized in any way to speak and understand such a high philosophy, but Śrīla Prabhupāda makes it possible for me to understand in his unique way of instructing and to repeat his words as I translate, because Śrīla Prabhupāda is a realized person. However, I have the experience that when the time is mature, his words that we have been soaking in as Vani translators flow out from us and reach interested spiritual seekers we meet in person or through Vanimedia anytime in this cyber world. I come to not only know the philosophy but realize it in daily life if my Vani service is regular enough. Kindly hear Prabhupada's vani with just a click on YouTube, FB or your browser and enjoy the whole range of vani offerings from one of the world's greatest spiritual leaders, Śrīla Prabhupāda, and meet him in person. Adre, Lithuanian - I just started my translating service a little more than a week ago, but it feels like I've been doing it all the time! I didn't expect this service would bring me so much joy. If possible, I try to do one video a day, and as Visnu Murti told me, ‘a video a day keeps maya away’ :-) This experience is so enriching… I am just taking my first steps in bhakti, and this service helps me a lot to go forward. Every day, listening to Śrīla Prabhupāda's words, I feel more and more connected to his teachings, his personality, and the whole Krsna Consciousness philosophy. 
I am also a person who needs to analyze and really understand things before I do or believe them – and Śrīla Prabhupāda is answering all the questions and removing the doubts. So through this service I grow and learn many things. Moreover, I realized I started quoting Śrīla Prabhupāda to my friends or family, like “you know, I’m now translating the videos of one of the greatest spiritual teachers of this last century, and about this topic he said that…” This service is simple, but also very responsible. You can do it from wherever you are. I find the mission of “Vanimedia” very beautiful and important, and I’m really proud and happy to be a part of such a great project. Kaan, Turkish - I really enjoy translating Śrīla Prabhupāda's speeches; it makes me feel like I am relieved from the materialistic world. Nityangi Devi Dasi, Persian - This is not just ordinary translating. For me it was the beginning of a very beautiful spiritual journey with the other dear devotees in our group, along the path of Krishna consciousness. During this journey, Śrīla Prabhupāda always shows us how to use our senses in seva, how to care for each other, how to really be a servant... He teaches us that with enthusiasm there is no hardship or exhaustion along this path. Everything is nice and clear, because He is always there to protect us. Anytime something happened to me, or I had a misunderstanding, there was a file to light up the way for me. I realised that these are not just files, words, texts, videos... All is Śrīla Prabhupāda himself and his shakti. I will never forget the marathon that we had in the last moments before the beginning of 2014; Siddhesvari dasi led us enthusiastically to translate as much as we could, and we grabbed the files from each other... Every moment of that journey was magnificent... Jaya Śrīla Prabhupāda! 
Siddhesvari Devi Dasi, Persian - By the mercy of Śrīla Prabhupāda, your amazing support and the devotees' enthusiasm to please Śrīla Prabhupāda - hereby we happily announce we finished translating the first 640 clips. This seva had a great by-product for us. Most of the group were not in the same city or even the same country, but by the grace of this seva we now have a very inspiring and close virtual sanga that started only because of Vani seva! Manohari Devi Dasi, Bulgarian - Of course on the spiritual path everything is so designed that the topics always correspond with the burning thoughts and feelings of the moment. You discuss something with your friend, and here it is, the same topic, in the next translated video, or in the next verse of "Charitamrita". Such "coincidences" are not surprising after a certain time, as mataji noted. You don't even mention it. Anyway, it is always pleasing to get this feedback again and again. Yet the real answer I've found within my heart is that actually my heart is so dull and stony that it does not feel any ecstasy to share. It is the same with the chanting. No tears, no divine rapture, nothing. This iron heart is so rusty that it did not melt, alas... Yet what to do? What to do, but continue doing it over and over, hoping that maybe one day, by the mercy, this desert will turn into a garden... Still, I am reminded of a story that my Gurudev would tell about Śrīla Prabhupāda, which gives me some hope. Once a personal attendant of Śrīla Prabhupāda told him: "Prabhupāda, wherever we go, devotees are so happy to receive you. They are singing and dancing, even fainting with ecstasy, offering prayers and flowers, throwing themselves at your feet, shouting: "Jaya Prabhupāda! Jaya Prabhupāda!" - this is what I see wherever we go together. And poor me, I am not feeling such ecstasy. Am I so hopeless?! It means I don't love you at all, Prabhupāda..." Śrīla Prabhupāda did not say a word. 
The day was passing and he still would not give his answer. The servant became very apprehensive - maybe he had offended him greatly? At the end of the day he was serving him dinner, and Prabhupāda was totally silent and very grave. After dinner was finished, the prabhu was sitting at the feet of Prabhupada, doing some massage. Then all of a sudden Prabhupāda asked: "Do you like to serve me?" The prabhu answered: "Of course, Prabhupāda! Of course I like to serve you!" "You know," Prabhupāda said, "this is love." Varaha Dasa, Yoruba - I'm glad I am still able to continue with the translations. Service to Śrīla Prabhupāda's vani includes preaching, book distribution, printing of books, and the continuation of book or audio-visual translations. I was connected through book distribution for many years and now my connection is through these video translations. Thank you very much for re-animating my Prabhupāda connection. So many benedictions have been coming since I started this service: I have had hectares of land donated, a house property donated with other plots of land, devotees coming from other yatras, Nigeria and South Africa, to assist and further inspire us in our projects. By Śrīla Prabhupāda's mercy his service is expanding just as I am making an effort to increase the translations. Thanks to Vanimedia. Mahabhava Svarupa Devi Dasi, Bulgarian - Thank you for sharing your exchange with the Russian mataji. I have had the same realizations. Whatever questions bother me, I have found the perfect answers in the videos I am translating. It seems like I am translating exactly what I should. This is so great and comforting. That's why we get addicted to Śrīla Prabhupāda's words. Jaya Prabhupāda and Jaya Sri Nrisimha Dev! Sofiya, Russian - Yes, it is a miracle what is going on through the translations. Every time I start a new video, I get just the answers to the questions and issues that directly or indirectly occupy me right now. I do not even find it amazing any more. 
I just realize that this is how transcendental communication works. Tulasi Maharani Devi Dasi, Mongolian - I am really happy that somehow I came in contact with the opportunity to take part in this fantastic project. It's a GREAT chance to obtain the mercy of Śrīla Prabhupāda - EVERY DAY, through the most suitable and easiest way, which we call the “Cyber World”, for ourselves and for my fellow native speakers - Mongolians. I really would like to express my gratitude from the bottom of my heart for your preaching and devotee-supporting activities via the Vanipedia project. Indeed, it's a uniquely sublime historical action to please Srila Prabhupada! Uniting the WHOLE WORLD. All glories to you and all those devotees and those who will be involved in the future. Thank you for including and inspiring me (dukha) to keep going. I was hoping to translate videos into Korean and had started studying this language due to its similarity to our mother tongue (in grammatical structure). But somehow it didn't work out. Anyway, I am pretty sure someone will take this opportunity. Today I did only one video, 0173, on the last day of the year. It was about spreading Bhagavata Dharma, as you know. Really, we need to give and preach the glorious movement of Sri Chaitanya Mahaprabhu. Nāsty eva nāsty eva :). Zeyneb, Turkish - Translating the videos gives me a feeling of being in touch with the devotees who were actually listening to the speeches so many years ago. Hearing Śrīla Prabhupāda is auspicious and makes me feel good and inspired. While translating, I get a chance to think more deeply about what Śrīla Prabhupāda says, and this experience is more than just translating; it takes me deeper into devotional service. Thank you all for this chance. Micha, Arabic - When I translate I feel it is amazing to be hearing Śrīla Prabhupāda's voice. It is as if I am present in his lectures. I feel so blessed to be able to do this seva and to hear his voice and hear how he explains everything in detail. 
The way that he speaks is very directed to the soul. These videos are amazing and a great seva, so again thank you so much. All glories to Śrīla Prabhupāda. Sarvabhauma Dasa, Bulgarian - I am really happy that I have the opportunity to share some words of experience and appreciation from my participation in this wonderful project. It has been an honor and delight to come in contact with this service opportunity and become a member of this united team of translators from all over the world with the mission to render the immortal vani of His Divine Grace in as many languages as possible. I am really excited that my fellow countrymen and women will get in touch with the instructions and voice of Śrīla Prabhupāda which, if taken seriously, can deliver them from all kinds of problems and anxieties in life. The submissive oral reception of the sweet message of Lord Krishna transmitted by His pure devotee is all we need to become happy in this world and be transferred to the spiritual kingdom at the time of death. So I can see how great and important this opportunity is, firstly for myself to purify my own consciousness by the words emanating from the lotus mouth of the spiritual master and then to pass on those same words, as they are, in my native language for those who are eager to hear and understand them. I would like to profusely thank the group of devotees at VaniMedia who are putting forth so much effort and bright ideas to create opportunities for us to get involved in service to the instructions of Srila Prabhupada, the greatest spiritual master in all history. It is a pleasure to associate with and assist all the devotees in their glorious service. May the fame of Śrīla Prabhupāda be spread all over the world. 
Mahabhava Svarupa Devi Dasi, Bulgarian - I have been praying to Krishna to engage me in some service and make some use of me, and by the mercy of Śrīla Prabhupāda and his sincere and dedicated devotees I got the opportunity to participate in this wonderful project. Very soon I realized that I had been missing this kind of association with Śrīla Prabhupāda. Listening to his words, simple and yet charged with such deep realizations and knowledge, and feeling his warmth, caring, compassion and generosity are truly life- and heart-changing. I thought how lucky I am to be able to perform this inspiring and lovely service. Every time before starting the translation I pray to Krishna and Śrīla Prabhupāda to give me intelligence so I can do my best. It is a huge responsibility to translate the words of a pure devotee, and so rewarding at the same time. First I listen to the video, then I translate, and in the end I listen to the video again. Just by his intonation Śrīla Prabhupāda stresses and enriches the meaning of the words so much. With just a few sentences he has the power to destroy long-lasting doubts and insecurities. When we engage our mind by listening to his words it surely becomes our best friend. I feel spiritually safe and at home. Rohit, French - I just finished the December marathon subtitling into French. It was a very ecstatic month despite the fact that I couldn't translate as much as I wanted to, but still this will remain a very beautiful memory and experience. Now I can feel why this month was very special to Śrīla Prabhupāda. Suvilasi Madhavi Devi Dasi, Hindi - I feel very fortunate to be able to hear from Śrīla Prabhupāda on so many topics, or is it just One? :-) I never tire of hearing him. He talks to us through his lectures and like Krishna he too is unlimited - in knowledge, in compassion - in everything. I hope and pray that, Krishna willing, I can continue to do this minuscule service for dear Śrīla Prabhupāda till he so desires. 
Sumangala Laksmi Devi Dasi, Tamil - I started by compiling with Visnu Murti Prabhu’s team; they then taught me to create pages and compile at the same time. As I was involved in this seva I had the opportunity to read the books and lectures given by Śrīla Prabhupāda in different parts of the world. It was so inspiring that I never stopped doing it even for a day. I realized one thing. The specialty of Śrīla Prabhupāda is all the valuable treasures (books) that he has given us. When distributed, they can multiply and multiply into many millions. They are everlasting; even after us, the next generation will benefit. It’s very blissful. When I was given the opportunity to translate I felt like I was directly associating with Śrīla Prabhupāda. I felt it as a gift. And it is very profound. For someone aspiring to become the servant of his servant, there is no greater gift than this. Prahlada Bhakta Dasa, Akan - Kindly permit me to share some thoughts about the multi-language project. I had the fortune of being invited as a translator in July 2013, when Visnu Murti Prabhu kindly invited me to do a 3-week vani retreat at Radhadesh. To say the least, this retreat has made a significant impact on my life as an aspiring devotee. Somehow, I had already read (from Śrīla Prabhupāda's teachings) some of the texts that I am currently translating. However, I must admit that the direct translations of the videos have offered me a rare opportunity to reflect rather deeply on what Śrīla Prabhupāda said... or meant. It is really amazing how blissful it is to associate with Śrīla Prabhupāda via this means (of translating). I depart for my home country, Ghana, with a deep feeling of separation from my new friends…Saha, Rishab, Visnu Murti (and his good wife) and very recently, Linda. My prayer to Śrīla Prabhupāda and to this wonderful team of VANI WARRIORS is that they heal my grieving heart from this feeling of separation by offering me further opportunities to serve them even in separation. 
I am extremely fortunate to be able to associate with Śrīla Prabhupāda at such a profound level. Many thanks to the vani team for giving me the opportunity to do this seva. Gunamani Devi Dasi, Danish - First listen, then understand, then explain it to others. This is the essence of this service, which came to this fallen soul, praying for some association. A wonderful opportunity to associate with Śrīla Prabhupāda and become enlivened in Krishna consciousness. Now the people of my country - friends, family, neighbors, colleagues - can hear the voice of the pure devotee, the voice which reveals his mood, humor and kindness even when he uses words such as "rascal", and they can read the subtitles and become Krishna conscious. Instead of a tombstone, my children and grandchildren, whom I love very much, will be able to hear of Krishna, if they want to remember me, when I am gone. It is like putting your money in the safest bank account in the world. Even if the world of electricity and oil collapses, as could be expected, there is still no loss or diminution, since the result for this soul is everlasting, and on top of that it is joyfully performed. I pray for the blessings of the Vaisnavas to be able to continue. Mahalaxmi Devi Dasi, Serbian - I am very grateful for this opportunity to sprinkle my heart with a few drops from the shoreless ocean of refreshing nectar of Śrīla Prabhupāda's words. Translating one video a day enables me not only to do some minute yet very personal service to Śrīla Prabhupāda by making his words available to those uneasy with English, but it has also become something like a daily transcendental surprise - unpacking what is hidden in each drop of mercy. Despite being more and more in awe of Śrīla Prabhupāda's expertise in presenting transcendental discourses with each contact with his words, I must sadly admit that many times, due to various reasons, I have failed to hear him every day. This service is very simple. And it is very profound. 
It allows me to spend time with Srila Prabhupada, serving time. For someone aspiring to become his servant, there is no greater gift than this. Mayapur Dasa, French - Hearing from Śrīla Prabhupāda, I am impressed by his expansive knowledge of the Vedic scriptures and his masterful delivery. I am also moved by his indisputable logic and ability to invoke devotion for Krsna. These cement in my heart the conviction of his position as acharya. I know I am safe as long as I stay with him. I appreciate my good fortune to have been directed to his teachings and pray to be given this same grace life after life, until such time that I may become eligible for entry into the spiritual world. Translating Śrīla Prabhupāda allows me to better empathise with the challenge of passing on descending knowledge. That the human mind cannot conceive of such superior knowledge is evidenced by its constant attempts to correlate this knowledge to the inferior empirical universe and vocabulary in order to understand it, share it and discuss it. It is this blending with the empirical that inevitably threatens the superior Vedic paradigms with "watering down". Irina, Russian - I like it. We have to think and then speak, not the opposite :) And we have to tell what we've heard to others. Very nice. Vijaya Baladeva Dasa, Czech - Thank you for giving me this opportunity to use some of my translating skills and at the same time to have direct association with Śrīla Prabhupāda through his teachings. Let's flood the internet with this wisdom; let's try to make Śrīla Prabhupāda's teachings well known to everybody. Lalita Gopika Devi Dasi, Latvian - Sharing my experience of translating Śrīla Prabhupāda's nectarean messages for Vanimedia: this opportunity is truly wonderful. From my studies I have understood that translation also is an art - one can use such translation methods as domestication, literal translation, addition, deletion, etc. 
But when we translate such an exalted personality, there must be careful consideration of how the target text should be adapted and which words should be chosen so that it harmonizes with the message Śrīla Prabhupāda is giving in the video. Another important thing is that Śrīla Prabhupāda is speaking in a conversational style, which means not all aspects can be rendered directly as they are but should be presented in a style listeners can understand, and the punctuation marks should be chosen accordingly… undoubtedly, changes have to be made. But the system you all have given is very well thought out. I like that in the system we can see which languages are taken and how far along the subtitle-making process is. Another thing: when we see already-added subtitles, sometimes the “whole picture” asks for some more changes in the target language - then you kindly offer to check and, if needed, make the necessary changes. Because from my experience, when we read the translated text over and over again, sometimes we tend not to see some minor slips, as we know the text. Then this “whole picture” helps a lot. Therefore I can say that my experience with this project is truly positive. I just wish you all the strength and enthusiasm to further this beautiful service. Of course a very nice motivation for me is that translation into so many languages can be used by so many... and when I see the clearly formulated target, it is easier to get enthused. Revati Devi Dasi, German - This service, this translating of Śrīla Prabhupāda's quotes for Vanipedia, is truly direct association with Śrīla Prabhupāda. I started only some days ago, and immediately I found that whatever was in my mind, Śrīla Prabhupāda was giving some answers to my questions - forty years ago! It’s not that I did not read his books, but still, this translating is much more personal. 
Like in one of the first translations I did, Śrīla Prabhupāda explained how just reading is not enough. We have to go very deep, try to really understand. And then – explain it to others! In some way, when we translate, we do exactly this: we make Śrīla Prabhupāda's words understandable (we explain, without change) in our language. That is perfection, Śrīla Prabhupāda said. So, at least on this point I came to perfection! I feel very much blessed by being allowed to do this service! So, in order to translate one has to penetrate very deeply into the subject matter, otherwise the result will not be understandable for others. And this absorption is connecting me - to Śrīla Prabhupāda, to Lord Caitanya's mission and to Krsna Himself. And immediately some new ideas are coming – about how to use what I read, what I learn, what I understand on a deeper (or higher) level in preaching. That's what I like most when I am translating, this feeling of being connected. Sometimes it’s not easy. The translation must be correct, but on the other hand understandable. And sometimes it is even difficult to understand what Śrīla Prabhupāda wanted to say. But then, when I really don't know at all how to do it – then suddenly, the inspiration is there. Not just which words to use to create the proper text, but also the deeper understanding of what Śrīla Prabhupāda wanted to say. And that's not just on an intellectual level. In these moments I feel the connection is there; I just have to be open to receive help. For me personally, in my present situation, this is the perfect service, because I can do it whenever there is some time, and since it is quite addictive, it saves me from wasting any time. As soon as I start (“just a little bit today”) I can’t stop anymore. I did some translations before, song texts, small books, and even started to translate a bigger one. But it is not the same. Not like service, but rather just for my own pleasure. Because no one asked me to do it. 
No one was asking me “how is it going?” and - last but not least - when will I be able to print it? This is different now, because whatever I translate can be read by others immediately. Furthermore, working in a team is more binding; it saves one from becoming idle and gives such deep satisfaction. I am "worried" only about one point: What will I do with my life when this translation service is done? I wonder where this will lead. Madana-mohana Mohini Devi Dasi, Polish - Śrīla Prabhupāda is saying in one of his short video clips that this is his practical experience, that while doing his reading and writing work he doesn’t feel fatigued and he takes pleasure in doing that. Well, as a new mother to a wonderful baby girl I can’t relate to it fully, as feelings of fatigue accompany me twenty-four hours a day. There is not much time and energy to spare for any new projects. But the pleasure in doing this translation seva… works as an addiction. Once I tried it, I couldn't stop. One clip per day. The pleasure is bigger than the fatigue. That must be the difference Śrīla Prabhupāda talks about in his lecture between material work and spiritual purpose. I wish for all of you to get this spiritual sense, this taste, to be part of this great project, to give these Pearls of Wisdom to people speaking your native languages. Even if you think you have no time for it, I believe you can find it once you give it a try. That is my practical experience. Mahesh Thali, Marathi - I am a Marathi translator from Mumbai, India. I thank Visnu Murti Prabhu for selecting me to perform this devotional service to our Spiritual Master and Jagat Guru Srila Prabhupada. To become a translator, from what I understand, one must have read and listened to as much as possible of Śrīla Prabhupāda's Lectures, Conversations, Letters and other books. Just reading purports of BG or SB is not sufficient for translation. 
This is because one has to understand Śrīla Prabhupāda (it's very difficult to understand him and Krsna) to whatever extent one can. It is like getting in tune with His Divine Grace and trying to understand his message. And having read a lot of his literature helps me very much in performing this translation service. Without truly knowing His Divine Grace and his message, translation is very difficult. Automated translators cannot understand the Vedic languages. So I have to translate the whole text myself, and that too without changing the basic essence of Śrīla Prabhupāda's original message. I'm also enjoying Sanskrit (Devanagari) typing, which I had never done in my lifetime. This page was last modified on 16 March 2019, at 03:25.
https://vanimedia.org/wiki/Multi-language_Subtitle_Project
NewTek Discussions > Archives > Lightwave Contest - WIP > A Word from The Loser. As one of the 4 and only animations (or was it three?) that were submitted from the WIP thread when the bell rang, I seem to be the only one that not only didn't get placed but was not even included in any way on the reel. ONE could say nothing, but that would be rather too polite. Hey? From one loser to another, I was hoping for a split second of show reel time myself! I think everyone who entered should have gotten some recognition for their work. Get over it; the rule of thumb when entering competitions of any type, in every sphere of artistic endeavor, is to see it for what it's not. OK, sorry... I sound like a sanctimonious retiree. :question:... links to the so-called losing entries are @ ... ? Cheers buddy. It's all good. I liked what you were doing. Proves it ain't easy! But it is back to work. Nudge me! Don't wither away man. Thanks people for writing & sympathy! Don't really know why I was bothered. It suddenly became one of those times when you put yourself on the point and ask - am I just going to say nothing? The answer is usually yes, but not always, and it's never clear. I too "lost" big-time :). Not one but two quick animations entered, and neither one made the cut. I put "lost" in quotes because I personally don't view it as such, but rather as a chance to critique my work and improve it. I think everyone who entered should have gotten some recognition for their work. That every entry should be in the showreel? You will never see a below-broadcast-quality example featured on any other 3D app's main website, and for good reason (not referring to your entries personally, just in general). Sometimes the community feeling here on the forums means people take things too personally. The recognition was expressed here (http://newtek.com/forums/showpost.php?p=733993&postcount=1). Watch the Olympics. 
Watch how they train and enter many, many competitions. Do the same for any craft and you, too, can be a winner. Wasn't the contest created in order to build some showpieces for LW? If you didn't win, don't be sour; just work on your quality. The people watching the LW demo reel are going to be skeptics, and if something on it doesn't look good they are going to say "this is what LW can do? I'll pass!" hi-larious! You'd be a little disaffected too if you spent 4 yrs training for a career, only to find out that your degree is worth about as much as a cup of coffee in the marketplace. Imagine the students with degree majors in things like psychology... only to find out AFTER the fact that there is NO demand for graduates with Bachelor's Degrees in Psychology. Somehow, the word "disaffected" isn't sufficiently descriptive. I can understand this. Critical psychology is one of the more subjective and ambiguous sciences. Hence its heavy indoctrination of cleverly muted and disguised religious principles. Of course, not ALL psychology is tainted with this social flavor, but much of what is taught to the body of hollow minds stems from archaic social principles. I'm right there with you. Submitted but apparently not good enough for the reel. You'd be a little disaffected too if you spent 4 yrs training for a career, only to find out that your degree is worth about as much as a cup of coffee in the marketplace. 1) PREPARE THE STUDENTS FOR THEIR SPECIFIC PROFESSION as much as possible! 2) Eliminate the WASTEFUL content (that the student will NEVER use) which contributes IN NO MEANINGFUL WAY toward the first goal. There is no logical reason to REQUIRE a student to go into massive debt for courses that are simply fluff... if these Education Union idiots want students to be "WELL ROUNDED," then they need to make those classes free of charge! 
Most students would NEVER elect to take them... because they know full well that they simply DO NOT PREPARE THEM FOR THE JOB MARKET. That is the mission statement of secondary education... not to put "Well-Rounded" individuals in the market, but rather "Well-Prepared" ones. What does a Computer Graphics Artist need from a Natural Sciences course? Even "Art Appreciation" requirements are nothing but fluff. If you don't get enough fluff in Junior High and High School, then so be it. Outside of that, the student is footing the bill, and the state shouldn't be in the business of forcing students to "buy" what they neither need nor want!!! Stop letting "The Man" rip you off! I agree - it's the same kinda thing here. We spent 3 years on a full-time course, requiring a big fat student loan to survive on your own, only to find our 'animation' course was being run by a guy who was qualified as a journalist, and his 'animation' classes were simply him handing us photocopies of a chunk of software manual and reading it to us. We could have learned faster ourselves by not having to listen to him, and just reading the stuff. The other problem is, of course, that once you give up on the tutors, you are paying something like £13,000 for a course where you are teaching yourself. Ready for a career after university? Hell no. Yeah, thanks.... Just what we needed. What you are talking about is a trade school, and those died out about 15-20 years ago. While I do agree that the Universities are in need of repair, I don't agree with turning them into trade schools that only teach specific subject matter. Know exactly what you are saying. Enjoy your losership while you can. It's quite boring to be the favorite. My point is that W-W-W-A-A-A-A-Y-Y-Y too much time is consumed in a typical 4yr degree on "Well-Rounded" education... again, fluff that flat out won't buy you a cup of coffee when you get out, and 95-99% of it will be flushed out of your memory banks within the first year. 
That's the reality of it, and these Education Union Scammers know it! Because it does not contribute anything meaningful toward the primary goal of preparing you for the marketplace, it should be kept to an absolute minimum (no more than 10-15%). If the student didn't learn enough General Ed material by the time they graduated High School, then I say, tough luck... time's up on the education system. You had them until they were old enough to be adults. Time to stop robbing them with wasteful, worthless, and most importantly... expensive courses. It doesn't just rob them of money and brain matter, but of another VERY valuable commodity.... TIME. Let me give you a practical example. When I lived in the Nashville, Tennessee area, I did some research on the various 4yr colleges in the area to see which programs had a degree in 3D animation. MTSU (Middle Tennessee State Univ) was the only one. It is actually a huge college, with a larger student population than the University of Tennessee in Knoxville. I looked over the course plan and it was the most pathetic excuse for an education in this career field you can imagine. All you had was an Intro, Intermediate, and Advanced Animation class. The other media-related classes were more or less cross-training in other disciplines, like graphic design. That's well below what a 2yr Technical college (like ITT Tech) would provide. So, in the end, a student graduating from there is far less prepared for the job market than the ITT grad... for the very same profession and job market. The key difference is that the MTSU grad can look down his nose at the Tech School Grad, when in reality... it's the Tech school kid who should be feeling very sorry for the MTSU kid, 'cause he's got about $30,000 more of student loan debt to deal with, and for the sake of bragging rights and a piece of paper, he also just funked-off 2 more years of his life. 
The most important factor is that the ITT kid is probably going to have a much more impressive demo reel. So, given that practical scenario, who invested wisely, and who didn't? I just feel very strongly that the whole concept of Universities and state colleges is one driven by pompous aristocrats within the Education community. I see it as partly driven by a self-preserving mechanism on their part. After all... if colleges and Universities DID begin to put more focus on preparing the student for the marketplace, then hordes of their colleagues would be out of a job! Couldn't have THAT, now could we? I say, let them get regular jobs like the rest of us. This idea that Universities aren't just preparing students for a career, but "PREPARING THEM FOR LIFE"... is the biggest line of B.S. (Bovine Scatology) known to modern man. It's the parents' job to prepare them for life... not the knuckleheads governing the Education Unions! All the Spanish classes I took... didn't prepare me... FOR LIFE! The Anatomy and Biology classes (took Biology in HS, yet had to learn the very same crap in college all over again) didn't prepare me... FOR LIFE. Well, over here the point of universities is not job training but to teach you scientific thinking (which also includes the ability to figure stuff out for yourself). Now, just that may be a requirement for certain jobs, but certainly not all of them. Mike - who quit uni. I didn't say I made a bad decision... I actually avoided it, but that doesn't preclude me from loathing the scam that much of the education system is... being taught Critical Thinking skills is worth $30,000-$200,000? Oh, just wait until your country is full of Mexicans :D. i am speaking only from a CG artist POV: this advice is not always the best. it really depends on what the profession entails, and in CG, i believe i know more or less what works and what doesn't, at least in our locality. 
i personally prefer people who have a fine arts background, a strong sense of design, color, etc. those that go to specialised schools may or may not have training in these things, because some spend considerable time learning software. learning software is not a good investment, imo, when in college. you can learn it after college, by which time the software will probably have changed anyway. i think more backbone courses, courses that make you think, are essential. in general, i'd rather have a 'well-rounded' person because they are usually easier to train. i suppose you can say that it can be about how it was taught. in high school i paid no attention to trigonometry, for example. but since getting into 3D scripting i've been getting out my old textbooks. if i had been in a frame of mind that could understand the implications of studying anatomy / biology, i think i would have welcomed those classes. to me that knowledge is valuable, especially as an artist. my only regret is that while they taught us those subjects, they didn't really present them in a way that was interesting enough that i would be more inclined to actually remember them. again, to me, education is learning how to think, not simply an inculcation of knowledge or skill sets. I think the worst thing you can get is specific application training. Learning an application is just a technicality compared to the other skills you need. I agree with you, but some people are worker bees; they don't tend to do more than they are supposed to do at work, and that's why they don't find a need for "useless" knowledge. I'm not targeting anyone here, but the people I've met at other studios. I know what you mean. In that case on-the-job training should be more than adequate though. Just as a different perspective, we have a three-tiered higher education system here: Universities, colleges and trade schools. University is definitely aimed at a highly skilled, specialized and mostly academic or scientific career (depending on the subject, that is). 
College (I can't find a more fitting English word) is more practical and a bit more generic. Trade school covers basically anything from a car mechanic to a media operator (note: not designer!). Usually you're a trainee (so you're on the job) for a few years, coupled with a bit of schooling. Either way you choose, you can work in this industry (there is also the maverick self-taught option, which I wouldn't recommend anymore; times have changed). I think the rants about the nature of the American College system are interesting. I mean, you don't HAVE to get a degree. You can take classes and learn what you want a good part of the time. I know that there are many students from all over the world who are sent here because a more rounded education is seen as a valuable thing. There are British and European schools that are subject-specific. There are trade-specific schools here as well... such as FULL SAIL. What they didn't teach in college was that the @$$ kissers get ahead. A course in networking would have been valuable... and I don't mean Server/Client networking. I mean professional networking. Creating a professional image. Marketing yourself. And your four year degree? Well, it may be devalued because of people who choose to outsource, or it may be worth less because kids out of school, using EDU copies of software on computers bought by their parents and working for $150 a day, are undercutting professional rates. Mostly, I think it is a mentality of certain people in our society. There are people out there who overuse the word "just". It is a word that seems, more and more, to be used by people who either don't understand the scope of a task, or want to belittle the efforts of others. These people tend to have a false sense of importance. They believe they are special somehow and are entitled to liberties, while the rest of us should be gleeful should they deign to leave us their table scraps. We all need to take a stand against people who belittle our efforts. 
When you are confronted by one of these people, calmly begin to explain things using language that will demonstrate their ignorance. When they feel sufficiently overwhelmed, then they are approaching enlightenment, in that THEY HAVE NO F'ING CLUE WHAT YOU DO! Your education isn't useless. If you aren't using it, that is your fault. If you let others belittle it, that is your fault. What they didn't teach in college was that the @$$ kissers get ahead. Although i must point out that college was exactly where i developed a disdain for people who will sabotage your efforts because they would rather be nice. "its nice", "ooh i like the colors" - the danger of everyone blowing smoke up your *** is that you will eventually start to believe them. you can also find them right here on the forums. just make a post criticising newtek. thats the truth, keep working; if u keep failing, u will keep learning from the failure and start winning. You're only a loser if you stop doing what you love to do. However, I think that a University education is actually very helpful. Look, if all you wanna do is learn Maya and get a job at ILM, then you will say "all those english, biology, physics, and history classes won't help me at all". There is some point of merit there, until you start to analyze what that learning will do for your overall creative flow. You see, all creative design - I don't care what it is that you do - has to tell A Story. This is non-negotiable. Look at the [insert name of favorite brown caffeinated sugar beverage here] on your desk. That bottle has a story to tell. It has specific engineering requirements but it also has design elements. What do those design elements say? What do the curvy lines on the bottle of Pepsi I'm looking at tell the buyer? Then there's the label.... Look at the images in Newtek's own gallery... or the most successful images on CGTalk.com. 
The ones that are the most successful (note - success does not equal a positive emotion upon seeing the image! Success can also be measured if people hate the image) will ALL be telling some sort of story. Where does this Story come from? It's not all from your Creative Writing classes in High School. It's not from the similar classes in University either. It comes from Your LIFE EXPERIENCE. If you are a world traveler, for example, and have a load of LIFE EXPERIENCES, then you have a lot of material to draw from to tell your tales. But if you're like most of the world, you don't actually GO very far outside your daily normal life. This is where University classes will help. Granted, I think that for art and other creative-type curriculums, the classes should all be tailored towards creative thinking. Like Psychology of Art, instead of just plain Psychology.... but if you are in that psychology class, see what you can glean from it and put that creative mind to use, seeing in what ways you can use these otherwise meaningless bits of trivia to your advantage. Perhaps in the biology class or chemistry class there is something that will help you to understand why your image of an animal doesn't look quite right. Physics classes can undoubtedly help with Lightwave in numerous ways, from understanding light, to expressions, to using that knowledge to make an animated scene far more believable. I was a student in one of these liberal arts curriculums, and instead of letting it get me down I did search for these things. Turns out that I was able to see patterns across the different classes that helped me not only understand the classes better, but also make some better choices in my animations. Of course, by the way I write, you'd think I was some Master Of All Things, and I'm not - but I did feel that what I learned was not wasted. It all depends on you. If you already have the stories, if you already have the art training... 
then what you need is a school like Gnomon or Full Sail or the DAVE School. Or go to a university like SVA or Pratt, which do try to explain things in ways a creative mind can actually use. yeah... sure... I think there's a tact to how you give critique as well. I'm starting to think that some users have mixed up the difference between being polite and kissing butts (HUGE difference imho). Strangely enough, being polite when giving critique works REALLY well in real life, so I try to practise that on forums as well. Glad I did the WIP part, just wish I hadn't actually submitted. Bad move - All is Vanity. My experience at college was all pretty good. Went to a state university that was okay. When I first started as a freshman a long time ago, I was living at home and even under these circumstances had over a thousand dollars in grants left when all was paid for. Of course, in those days people could make so much that many just dropped out because they were bored. I did the same, but not for those reasons. Went back about 10 years ago and took up where I left off. Things were not as easy, but even still, in the last term I did an internship and so had enough left over from the grants to buy Lightwave. As far as technical knowledge is concerned, we were always amazed at how little of the software the lecturers knew. But one day in the last year, I heard this class going on in the computer lab (someone in the faculty had decided to get a teacher from the tech school or something) with this guy going over all sorts of commands and trivial detail about running Adobe Illustrator. Boy, that was really the end of that idea in my mind. But all in all I think 3rd-level education, these universities, are really some of the greatest institutions created by man - that is, next to hospitals etc. The brain power for everything we have starts here. The days of da Vinci and people like the Wright Brothers are gone from this over-complex world. All is Vanity. 
Go to a technical school to learn Maya to make an image. Go to a design college to learn to make an image look good. Go to a university and learn why you should make an image in the first place. HAHA! That's about right, adamredwoods. -Spirit (Lots of writing and soul searching in the name of creativity). Though I would probably do all of these things simultaneously. a good quotable quote. i agree, however, not completely on the mark. there are numerous artists (and writers) who were not educated and yet had great heart. 'education' can mean education in general, education in life. you can go to all the schools and still not have an education, still not be sensitive, or know how to think. Look at the images in Newtek's own gallery... or the most successful images on CGTalk.com. The ones that are the most successful (note - success does not equal a positive emotion upon seeing the image! Success can also be measured if people hate the image) will ALL be telling some sort of story. that's what you want to think. but i think otherwise when i recall all of the great writers, poets, and even artists who have died without a measure of 'success' - not even recognition. and i'm sure that many a great writer, poet, or artist is dying now as we speak. we're just too busy putting up eye-blowing, mouth-watering, socially-dependent, stereotypical CG images in our web page front banners for us to notice. i agree, for the most part, about your concept of Story. but i find the references to cgtalk-esque images serving as an example of Story a bit anti-climactic... at least for me. Go to a university and learn why you should make an image in the first place. I doubt the 3rd one. For many professions, a University might work fine... as applied sciences may become useful. Again, it's ALL about obtaining knowledge that you can build upon and CONTINUE to use. But much of that is wasted time, effort and money for some career fields, as it will likely be soon forgotten and go unused. 
A broad liberal education may be a wonderful thing, if it's free... but if you are saddled with a mountain of debt, you have to ask yourself the honest question... "WHY"... why am I still paying, 10 yrs later, for frivolous courses I was forced to take and didn't need at all? Universities ought to, instead of painting requirements for students with a broad brush, tailor EVERY single course toward the goal of preparing the student for the job market. For example, even in private art colleges, speech/public speaking is a requirement. And because verbal communication skills are important in this field, that's very understandable. Nevertheless, who would pay $50,000+ for 4yrs of just general education requirements... so that they would have this diverse knowledge (again, that won't buy them a cup of coffee)? No... the student's objective is not to fill their 4yrs with fluff, but to get past the fluff as quickly as possible so that they can begin to focus on CAREER training. When fluff is free, then perhaps you have a point. But until then, it's a colossal waste, with an extremely minuscule return on the (hefty) investment. If you look into the Disney Animation program, it went through a period where nearly all of their new talent were graduates of the CalArts program. John Lasseter, Brad Bird, even Tim Burton. So, networking with other students and the alumni of a program also may offer benefits. And believe it or not, there are people (spelled EMPLOYERS) who are stuck on that idea of having a piece of paper. Now, I'm not saying that makes sense, just that there are some people out there who think less of people without a degree. And there are some who think more of people because they have a Masters Degree. Personally, I place value on a person. Are they good? Are they easy to work with? Are they dependable? Do they give off a weird vibe? Also, a degree gives you something else... a fallback. 
You may WANT to be an artist with every bit of your being but just not have what it takes. With a degree you can usually find a job even if it isn't in your field. A degree shows an employer that you have what it takes to stick it out and finish what you start out to do. It ain't much, but it's still a fallback that beats dropping fries at McDonald's. I doubt the 3rd one. For many professions, a University might work fine... as applied sciences may become useful. Again, it's ALL about obtaining knowledge that you can build upon and CONTINUE to use. But much of that is wasted time, effort and money for some career fields, as it will likely be soon forgotten and go unused. A broad liberal education may be a wonderful thing, if it's free... but if you are saddled with a mountain of debt, you have to ask yourself the honest question... "WHY"... why am I still paying, 10 yrs later, for frivolous courses I was forced to take and didn't need at all? While there may be some frivolous classes out there in Universities, most of them are not fluff. A well-rounded education is key -- when you have outgrown what is taught as a skill, where do you turn next? Sometimes people draw from other areas of study to create something new. It's one thing to have a skill and be specialized. It's another to become SUCH a master that you can create a NEW skill, something that doesn't exist yet. Universities (some) offer the environment to allow that study and exploration of WHY. Yes, we can get a lot out of just "life" herself, but it also pays to know the history of a subject, and who else has studied that subject, so that we may share our learnings. Universities offer this. Specialized training to prep one for a "job" does not. I think there is a change in the US, where the traditional degree is becoming more like a high school diploma, and the Masters Degree is the new elite. 
I think there is a change in the US, where the traditional degree is becoming more like a high school diploma, and the Masters Degree is the new elite. The bottom line is this... if one spends 4 long years at a State University (not a private college like SCAD or Art Institute, Academy of Art, etc.), they will most likely not be prepared for this field of work... period. That is why many studios have, in the employment/internship section of their website, a list of colleges that are reputable for graduating top-notch artists prepared for entry-level work in the industry. Most, if not all, are PRIVATE colleges. Funny. Why is that? Well, just examine the course outline. It's not like a University doesn't have enough time to prepare a student in this field... it's just that they waste an excessive percentage on a courseload of Gen Ed and elective Fluff. So, essentially... the student gets less than 2yrs of career-specific training. You can try to justify the waste, but a spade is a spade. I have six years of distinguished military service, but it only goes so far in impressing an employer. It's the same with a degree. It may look better on a resume than not having one... but in the end, it's ALL ABOUT PERFORMANCE! The Creative field is even more about skill than whether or not you spent 4yrs at a Uni studying Philosophy and Psychology. Offering only 3 courses in Animation in an Animation major is both a joke and, worse still, a RIP-OFF! The University grad will lay down the cap and gown, search for weeks or months for a job in the field... only to find that the $50,000 education didn't do what it was SUPPOSED to do... prepare him for entry-level work in his/her chosen profession. If Universities can't meet that simple requirement... THEY HAVE FAILED, AND THEY HAVE ROBBED THE STUDENT OF BOTH TIME AND TREASURE! It's that simple. 
The bottom line is this... if one spends 4 long years at a State University (not a private college like SCAD or Art Institute, Academy of Art, etc.), they will most likely not be prepared for this field of work... period. Absolutely. Universities don't, and never have, offered job training. They extend your potential. Then again, education isn't job training either, luckily. Mike. Maybe not in Europe, but in the US, parents would be absolutely livid if they learned that they spent tens of thousands of dollars to send their kids to college just for some broad general education that DID NOT MAJOR IN A SPECIFIC CAREER DISCIPLINE. You're saying in Germany, if you want to be an accountant, or a Physician, Architect, etc., don't go to a Uni? I don't disagree with you, but I also know that Universities in the US are seen as a general education, especially state Universities. I don't think it's a surprise, nor a setback. It is just something to be considered in choosing a career. Some Universities are stronger in certain degrees as well. Animation is generally weak at the university level, focusing on history and artistic theory rather than commercial prospects. I applied to San Francisco State, and that is exactly what they told prospective students. Which is good, because then you won't be dismayed as to what they are offering. Again, I strongly feel the university level is the new "baseline". It's more and more common to have a Bachelor's degree, so employers have to weed out candidates using higher criteria. For some degrees, additional education at a 2-year specialty school, or even a Masters, is needed. More money, yes, but this is the route these days. University vs. specialty education: it's like being a Lightwave Generalist vs. a Maya Rigger. Both have places in life -- but one will know a little bit about everything, while the other may know only a small part of the bigger picture. 
Maybe not in Europe, but in the US, parents would be absolutely livid if they learned that they spent tens of thousands of dollars to send their kids to college just for some broad general education that DID NOT MAJOR IN A SPECIFIC CAREER DISCIPLINE. You're saying in Germany, if you want to be an accountant, or a Physician, Architect, etc., don't go to a Uni? Having a major in a specific career discipline is not job training. It makes you a specialist in a certain discipline... but not necessarily a productive member of the workforce. No, you don't go to Uni to be an accountant; that's where the dual system comes into play (trade school coupled with an apprenticeship). Physician, yes, with a further specialisation after the (more or less) equivalent of a bachelor's. That doesn't mean you're allowed to practice once you leave Uni though; you still need an approbation (a license to practice) for that. You don't study "Physician", you study Medicine. So, in general, you study a discipline, not a career. The best thing about any learning institution isn't that it teaches you some thing. It's that it teaches you how to learn. How to research. How to formalize and communicate an idea. As for State colleges not preparing students for careers... then why do so many different companies from many different fields send recruiters to the state college in my town? Definitely a very good subject. I think one factor that does not translate well to the way the education system is set up is that computer graphics is a very rapidly changing field. The education system is based around some kind of prediction and stability. Not only is the computer graphics industry a very new development historically, it is moving at a rapid pace. Both the medical and scientific branches of study seem to move slower. 
Yes, there are advances, but they are advances that take more time to develop and even longer to become accepted both in and out of the university system, which is already far more integrated with the professional field - because it has been around for centuries - than something like computer graphics, which is less than half a century old. This is why a vocational college is probably better suited for this kind of industry. I think that a lot of colleges and universities just miss the ball when it comes to a CG/Art/Creative degree. These degree programs should be more like engineering, in that the majority of the credits teach you things "about your discipline". I think it's just that, all too often, particularly in the state schools, the art programs get lumped into the "liberal arts" and are therefore way too broad in their preparation. If someone is majoring in History, Psychology, etc., then the Bachelor's program is really more of a "weeding out" for students who will or will not move on to Master's programs. Those that don't move on aren't really qualified for much, and probably never will be, for any "specific" jobs because... there mostly aren't any "specific jobs" in those fields, outside of teaching High School or something. In that case, teaching, you really do need the more "general" education, as you need all the tools to teach the subject, not just knowledge of the specific subject matter, so the more generalized education program serves that sector well. When I majored in Engineering, most of my credits were science, math, and engineering - like 90+%. I didn't have to take a language, and I had only a "few" liberal arts requirements (history, electives, english), and even the english was "technical writing". 
So... I think in large part you need the Universities to reform their curriculum in these creative fields, since the other paths - liberal arts degrees leading to teaching or advanced study (PhD, Doctor, Lawyer) and the various types of Engineering - are reasonably well served. The creative fields should be modeled more like the Engineering ones, but in all too many cases, the programs are modeled after the liberal arts path. Absolutely. Universities don't, and never have, offered job training. They extend your potential. Then again, education isn't job training either, luckily. We in the US call it internships. Lots of majors have them, including film and video. It also counts as job experience. Sometimes they pay, sometimes they don't. The problem is that sometimes (depending on location) there may not be a company offering internships in the area that you want. Some colleges have job placement programs. I think that a lot of colleges and universities just miss the ball when it comes to a CG/Art/Creative degree. These degree programs should be more like engineering, in that the majority of the credits teach you things "about your discipline". I agree that maybe more classes should have been devoted to a discipline, but I think a lot of the classes I took for my design degree were very beneficial. In meeting the requirements, I took courses in Anthropology, Philosophy, and Psychology, just to name a few. Drawing from what I've learned from those extra "unneeded" courses makes up part of who I am today. New approaches to problems or different ways of thinking, which could also aid the creative process. Though of course, I'm on the creative side of things. Coming up with concept designs means you have to rely on your own insight and knowledge as well as utilizing references. 
Professions where you just learn a craft, such as CG, where you're handed a concept to create realistically, don't require a course in Philosophy or any of the other requirements a University might make you take. You just need the knowledge of technique and to develop an eye for making realistic CG. You could learn that at a technical school. For things like interior design, graphic design, illustration, concept art, etc (fields which are increasingly using CG).... I think a more well rounded education is needed to expose you to other concepts and ways of thinking than just classes in which you learn technique. I knew a guy in college who went into his illustration courses wanting to make comic books. A couple years later he became a medical illustrator because he took a course in medical anatomy which sparked his interest. This is a good point. However, I would quite frankly question just how much "ways of thinking" people learn in this more generalized path. I think that someone who ends up getting something out of this type of system is "already" capable of creativity and thinking, therefore, they will most likely excel in any system. However, the average person, and I see a "LOT" of the average person performing nowadays, will most likely gather little from this. They'd be a lot better off in a more structured system of courses that applies directly to their interests and future employability, to be really honest. I'd also question what value this truly has against the total cost of education. My personal experience was, I had very little time, and therefore got very little out of the "overall" college experience (elective courses, college life, etc.) I worked a full time job during school and 80+ hours every week in the summers to get through college, and therefore anything that wasn't an efficient use of my time was mostly a frustration, and not something I could be involved in, even if I wanted to.
Additionally, had I left school and had trouble finding a job with my tens of thousands of dollars of loan debt...I would have been none too pleased. Fortunately I made a choice to pursue something that pretty much makes me "very" flexible and able to find a job, in almost any type of economy. The vast majority of people don't end up this way, and this is most definitely a fault of the educational system itself. I see this every day, teachers, guidance counselors, administrators telling students "oh yeah, just go to college and see what you like, and take a broad range of things, and find out who you are" meanwhile 50%+ of these students don't make it through the 4 years and/or do, and come out and get the same job they could have gotten with no degree (manager at fast food, manager of a grocery department, waiter, bartender, police officer, fireman, etc.). The worst part is, most of the students I see will have to work while going to school, will have to put themselves in debt to get through the whole process, and the people advising think nothing of this because they either a) didn't have to pay for school themselves or get much in loans or b) there is no accountability/visibility whatsoever once the student leaves the door and the administration has been able to check off some box that "such and such percentage is college bound". One always needs a General education as much as one needs to expand on their strengths through a series of more Specialized courses. Anything lacking too much on either side of the spectrum will just frustrate most people I think. This is again something that only works for a small number of students who have a lot of support, or a lot of luck. Unfortunately, and I was a good student in college, I had no way due to geographical issues and financial issues, to do this sort of thing. 
It wasn't a large deal for me, as my course of studies actually did apply to what I'd do on a daily basis after graduation; however, had I wanted to do an internship, it would have most likely financially bankrupted me because of the severely increased "cost" of relocating to the place of internship. Because of this, even as a better student than most, I would continually lose out on these opportunities to those who could "go off somewhere for the summer", make money, but spend most of it on rent and food for some apartment somewhere (in fact most didn't need the money for rent and food anyway). This was because the amount paid for these internships (those that even "were paid") was less than I could make on summer jobs, and I pretty much had to work locally to minimize my living costs and save money to keep going to school. Quite frankly this is a cop-out answer as well, because you're already paying 10K plus for school...they should be preparing you for what you actually have to "do", not relying on a small number of available internships in private companies to fill that void. Plus, if we're talking about the majority of people, there are FAR fewer internships available than there are people to fill them. While I agree, you don't necessarily need an institution for either - unless a degree is a requirement in the field you want to get into. I suppose trying a few things for a couple of years might make you look like a slacker, but can actually help you pursue what you want with more vigour in the end. I agree Lightwolf, you don't necessarily need a degree to be successful in your own hopes and dreams! I do have some general ed degrees in art, design and computers but that was out of ignorance of what I needed to know at the time. ... My personal experience was, I had very little time, and therefore got very little out of the "overall" college experience (elective courses, college life, etc.)
I worked a full time job during school and 80+ hours every week in the summers to get through college, and therefore anything that wasn't an efficient use of my time, was mostly a frustration, and not something I could be involved in, even if I wanted to. Additionally, had I left school and had trouble finding a job with my tens of thousands of dollars of loan debt...I would have been none too pleased. Fortunately I made a choice to pursue something that pretty much makes me "very" flexible and able to find a job, in almost any type of economy. The vast majority of people don't end up this way, and this is most definitely a fault of the educational system itself. I see this every day, teachers, guidance counselors, administrators telling students "oh yeah, just go to college and see what you like, and take a broad range of things, and find out who you are" meanwhile 50%+ of these students don't make it through the 4 years and/or do, and come out and get the same job they could have gotten with no degree ...The worst part is, most of the students I see will have to work while going to school, will have to put themselves in debt to get through the whole process, and the people advising think nothing of this because they either a) didn't have to pay for school themselves or get much in loans or b) there is no accountability/visibility whatsoever once the student leaves the door and the administration has been able to check off some box that "such and such percentage is college bound". This is just so true, except it isn't really the educational system so much as the people who feed off it. Without the grant system as it was in its prime, the original American idea (middle class dream circa 1930-60) of University doesn't make sense. When I went, oh so long ago (I won't even hint how long), none of the students worked except on weekends. They usually had an old family car, no security guards on campus, no parking fees or fines etc.
Some classes took place downtown in bars (philosophy). But actually, the standard of teaching was quite high and classes were small and informal. Now all the grants and entitlements have been removed and prices have just gone way up through the roof. But the really depressing thing is, many of these very people who have benefited from all this are the same people exploiting the present day students. There are a lot of people walking around the US who have 2 or 3 degrees which they never use and never paid for; now kids can't even get one without being worked off their feet.
2019-04-19T10:33:52Z
https://forums.newtek.com/archive/index.php/t-87541.html?s=5d4754ed7f6693ac4c5c51decc6075e6
The adventure of picking up the plane from Just Aircraft, part 1. For the record, actually picking up the plane wasn’t really an adventure. The only thing that went “wrong” was that I expected to pull up Monday morning at 7am and find a plane wrapped, boxed, and ready to load. What I pulled up and found was that the plane was there, by the door ready to load, in all its pieces and parts. Nothing was wrapped or packaged. It ended up taking three hours with three of their folks working on it, to get everything packaged and ready to go. I remember thinking the charge for packaging of the kit was a bit excessive when I looked at the quote. After watching the amount of care, plastic, padding, and duct tape that went into prepping for shipment, I think they may be undercharging. Also as I was standing there talking to people, I found out why the plane was not ready. Not because I asked the question, but because they started telling stories of the different ways that people show up to take their plane home. Crazy stories. One guy apparently strapped the whole contraption to the top of a Jeep! I guess they don’t package anything up till they see you pull up and see how it is going to be loaded. Makes sense. He smiles and I reassure him we’ll get out. I take a look at the status of the truck. About 3-4″ deep mud where the tires have been. Beautiful roadside grass everywhere else. No indication that it would be this soft. Oh well. I ask if he has his family with him. I ask who was driving when they pulled off. I ask, “Is she mad at you?” with a big smile. “Best stay over here with me then, it is safer!” I say with a laugh. I find you can either laugh or cry in situations like this. Better to laugh if you can. Nobody is shooting at us. Nobody is dying. So in reality it is just an adventure. After taking a look at the truck and the soil, I harken back to my surfing days when I used to go surfing with my friends.
I learned from Jim that before you go out on the beach, you let the air in your tires down from highway pressure to 15-20 psi. This softens the tires and lowers the ground pressure. I’ve seen trucks buried nearly to the frame magically levitate out of a hole once the air pressure was dropped. My tires are set up for carrying a load on a 1 ton pickup. They run at 70 psi. As I chatted with the dad, I started letting air out of the tires. While I was letting air out, a wrecker stopped at the minivan, which was still 100 yards ahead of me. He pulled 1/2 off the pavement and half on (he’s a professional). After a quick chat with dad, he decided to leave. Except that even 1/2 on the pavement, he got stuck. I watched him for a while slipping and spinning while I was letting my air out. He eventually used some of his wrecker equipment to get himself back onto the pavement and he left. It took about 30 minutes to get me to 20 psi. I gave the truck a try. Nope, while I could move I couldn’t get back to the pavement. I let another 5 psi out and ran them at 15. After much back and forth, I was able to get one tire onto the pavement. Voila! With a 4 wheel drive truck, one tire is all I need. Thanks Jim! You never know when a life lesson from surfing will come in handy as an adult. I pulled my now exceedingly muddy truck down to just ahead of the minivan and hopped out. I was 1/2 on, and 1/2 off the road, with no wrecker lights to guard me. Also, in the 1 hour we were there, no cop ever came by even though the minivan driver had called them immediately after getting stuck. Oh well. I got out of the truck, pulled out the jerk strap, and handed it to the dad. I explained how I wanted him to hook it up to the minivan, that I didn’t want to damage anything while pulling it out. I also explained, quietly to him, that I was going to have him get down in the mud and hook everything up. He’d get muddy, but he’d be a hero to his wife and kids.
If I did it, he’d be the idiot who got them stuck and I’d be the hero. No good. He nodded and went to work. While he was finding a spot to hook things up, a kid came up to me from seemingly nowhere. I looked over his shoulder, and across the highway. A Ford Mustang was nose down off the shoulder on the opposite side of the road. I seriously doubted he’d be able to back up that incline even if the ground was firm. No way with as wet as it was. Ugh! Now I had to get him out as well. I thanked the kid for stopping and told him he was very generous to risk himself to help. I told him to hang out a minute and I’d get him unstuck as well. After some fiddling with straps, ropes, etc, we finally got hooked up; the lower A arm on the van hooked to the back of the trailer seemed to be the best way, although we tried the tie down ring on the front as well. I explained how this was going to work, and we pulled the minivan through the mud and back up to the asphalt. While we were getting straps off and prepping to cross the highway and get the Mustang out, I hear some guy barking orders. I look up to see what looked like a fireman type guy in a pickup truck, yelling to the dad that my truck needed to be gotten off the road before we caused an accident. I’d been really jovial up to that point. Helping people is fun. That guy made me pretty mad. But I just chewed my lip and he drove off. Thankfully I never actually talked to him or I may have said something not quite so friendly. Mom and the kids took off, leaving dad to help me get the Mustang out. Mustangs have a solid rear axle because apparently 1960s technology doesn’t need to be updated. It also makes for an excellent place to hook a tow strap. We hooked the jerk strap to the axle, then to the front recovery hooks on my truck. This was after crossing a highway on wobbly tires, covered in mud, while pulling a trailer. Fun. After taking up the slack, I backed the kid out and got him on the road.
I thanked him again for being willing to stop and sent him back to Clemson where he was a student. During this time, an Army veteran had stopped and was offering to help as well. With the kid gone, mom picking up the dad, covered in mud and grease, and wobbly tires, I drove the Army guy back to his vehicle, which miraculously wasn’t stuck at all, and bid him farewell. Then I hopped back onto 123 and went to the next exit on tires that were woefully under-inflated to be on the highway. I’m still pulling a trailer and now hunting around for a gas station where I can put air in them. It’s about 5:45 and getting dark on a Sunday night. I’m getting nervous that I won’t be able to find air in this tiny little town. I do carry a portable air compressor that runs off of 12 volts, but it takes FOREVER to pump any volume of air. It truly is a last ditch tool. Plus I haven’t used it in forever so who knows if it even works. I find the only gas station in town, find their air compressor, drop 75 cents in it, and it springs to life! Yeah! So much for “no good deed goes unpunished.” I fill up the two closest tires as best I can, and I can tell by the end that the little pump is doing about all it can. I pull out my tire gauge and it says 28 psi. I run at 70 psi. Ugh. That isn’t good. Another 75 cents and I get 28 psi on the other side of the truck. Now I can run at low pressure but not very far nor very fast. I look at my hotel directions and I’m only 15 miles from the hotel. It’s now full dark and Sunday night. Better to be at the hotel and figure it out tomorrow. I limp to the hotel, grab a bite, and grab some shut-eye, on the way noting places that might have air at 8am on a Monday. When I look at my distance from the hotel to the factory, I find that it is only 7 miles to the factory. So I could be at the factory at 7am and get air there, or I can wait till 8am, get air in town, and then show up at probably 8:30 or so.
Since I have to drive 9.5 hours that day, I go to the factory, where after grabbing every air hose in the place, we stretch out enough hoses that I can spend 30 minutes, even with their really good air pressure, filling my tires back up to normal. Waiting to load the airplane while sweet, sweet pressurized air flows into the tires. Since I had three hours to spend, I was able to look at a lot of the factory again, and talk to a number of folks. But that is part of the next story. It is time to pick up the plane! Friday was crazy busy. Lots of running around, appointments, my first time going for my BasicMed doctor’s visit, stopping by HRJ to check on the annual for 54SS, and then finally heading to Siler City to pick up 650 lbs of beef from the processor and putting it away. While all that is on my mind, what is really on my mind is that Sunday I leave for Walhalla to pick up N41RW. Oh, I forgot to mention before, that’s the tail number that I reserved. I wanted N4RW but that one was taken, of course. The reason for the tail number? My father’s initials, and frankly his name, was RW. That is what he went by. Only my mother or the latest idiot from Deere trying to establish a relationship called him Rufus. My father encouraged me to learn to fly. He was a ball and tail gunner on B17s in WWII. He always respected his pilots and thought it was great if I learned how to fly. Were it not for him, I certainly wouldn’t have made it through, to now be passing a love of aviation onto my children. So this airplane is 4 one RW, my father. So this was my first foray into the Basic Med world. It took more doing to convince the doctor’s office that they could indeed do the exam than the actual exam took. I can’t think of when I’ve last had a full exam. My 20s maybe? Everything checked out fine, the doctor was funny and we laughed through the process. She asked me if I was feeling blue or depressed (standard questions).
I’m getting a less intrusive medical and about to go pick up a new airplane? Nope, I’m feeling pretty darn good! Our Lance was due for its first annual since our purchase in May. The first annual is always scary. Sure, you do a pre-buy but you never know what is going to appear as a problem once things start getting taken apart. I had 6-7k as my budget, and that was assuming nothing dramatic was found. With this final check up on the process, we came in a tad over 3k. The last first annual I’d been part of was over 17k so this was way better than I had hoped. We have a clean bill of health, a good running engine, and a nicely upgraded panel. Now we only need to address the autopilot not holding altitude and we are golden. Now it’s time to head out to Walhalla, SC to be ready to pick up the airplane first thing Monday morning. I have the truck ready to go, the trailer hooked up and ready to ride, snacks, a change of clothes, and some audio books, which I’m going to need. It is about five hours, not including stops, to get to Just from my house. I’m actually stopping at a motel just outside of Walhalla, spending the night, then proceeding on the next morning. I’ll load everything, pull an inventory of what I can see, and then head East bound. I won’t be going back home. Instead I’ll be heading to Grantsboro to drop off the plane with Robbie, whom I’ll be building with for the first two weeks. That adds a couple of hours to the return trip. About seven hours door to door, again not including stops. Then I unload, turn around, and drive the 2.5 hours home. Monday is going to be a LONG day and a LOT of miles. But my phone is loaded with audio books and podcasts, and I have some snacks to keep me from stopping too often. Hopefully things go off without a hitch and come Tuesday morning I’ll be a kit plane owner. You ever had to pretend to be an adult instead of being a squealing little girl bouncing up and down and screaming? Yeah, me too.
I received that first question from my buddy Dan, just as a casual aside to another conversation we were having. He’s in the Army down at Bragg. He also owns half of my airplane, N54SS. Great guy, great family, perfect partner in an airplane. But forget all that, he has connections to the coolest toys on the planet, Uncle Sam’s personal toy collection that is sadly kept behind lock and key and M16 so I can’t go play with them. Till today. I had occasion to go to something like this once before, maybe twenty years ago? I had a friend of a friend who was going to flight school at Camp Pendleton for the AH-1W Cobra. Somehow he wrangled us a quick visit to the simulator room where we could fly the Cobra simulator for about 30 minutes. It was a quick in and out, and he was a student at the time so there was a lot of bowing and scraping and staying out of the way. BUT, it was SUPER COOL and something I’ve talked about ever since. So Spork and I blow off a considerable chunk of school and work and head to Simmons to meet Dan and Christian, our guide through the world of Apaches. Christian is a Warrant Officer in the Army. He also happens to be a former Major who resigned his commission and busted himself back to Warrant Officer so that he could fly more and push paper less. I liked him immediately. After going through security and getting our pass, we head straight to the simulator where Christian gives me the 5 minute instructions on how to operate a 23,000 lb helicopter. 4 minutes of the 5 is spent on getting his personal helmet on my watermelon sized head. I could hear my brain squishing out of my ears as the helmet jammed into place, but I didn’t care. I was going to fly this thing! In case you don’t know Firebirds, here is the promo. And if you want to suffer through not only the movie, but a bad copy which seems to be sped up for some reason, here is the entire movie on Youtube. Spork in the gunner’s seat.
While I was getting set up in the pilot’s seat, Spork was getting set up in the gunner’s seat. This would be the front seat of the helicopter; the pilot operates from the rear seat. We only had the one helmet, Christian’s actual flight helmet, so I couldn’t talk to Spork during our flight, which was unfortunate. I could however tell when he fired something. I spent a lot of time trying to point him at things I thought he might want to shoot. After the flight I asked why he didn’t shoot more. I didn’t want to waste ammo. The electronic bullets too expensive for ya? We had a good laugh. He did get through a good amount of 30mm and fired off a bunch of rockets before we were done. He had fun. The last time I flew a helicopter was years ago. Like over 10 years ago. Heck maybe 20 years ago, I don’t really remember. Cyclic, collective, torque gauges, take it slow. I picked up into a hover that would make an instructor cringe, but for me it was actually pretty good considering the rust and lack of familiarity with the helicopter. I accelerated down the runway and cruised around for a minute. Then I headed back down the runway the other direction and pulled into a quick stop, which is the most fun I’ve ever had in an actual helicopter. After several quick stops, I did some hover practice and various other maneuvers I remembered from flight training. It was a hoot. Then I just did some general flying while Spork launched 30mm and rockets at whatever he felt like. I tried feeling out the Apache. I found that it was very easy to fly, except it’s not a 1400 pound R22. It’s a 23,000 lb beast of a machine. If you let a big sink rate develop, it does not just pop back to level flight like a light helicopter. You have to plan your pullout to avoid the cumulogranite. I may have overtorqued the engines a wee bit discovering that. Thankfully they were electronic engines. After my time in the cockpit, we got Spork into the pilot’s seat.
Christian did an excellent job of getting him acclimated and before long he was flying. After the professional got out of the way, I stepped in to give some instruction. I told Spork that he was doing well and I was proud of him. I’ve flown a helicopter before, dad. He has exactly one hour of R22 time. So in his mind, getting out of an R22 and into an AH-64 Apache is, “Meh, it’s the same thing.” And I guess it was. He was cool as a cucumber. He flew so well that after a few minutes Christian said, let’s let him fly at night. The Apache was designed to own the night. That’s its purpose, which I didn’t know. So we turned off the lights, set him up for night flying, which involved getting the monocular even more adjusted, fiddled with a bunch of knobs in the cockpit to get the displays just right, and then turned him loose. He flew like it was no problem, taking off from a field and zooming around while the gunner was taking shots here and there. Then he flew back to another area and proceeded to make a zero ambient visibility, monocle driven night landing with hardly a bump when he set it down. I guess the eye roll was warranted. After our flight, we proceeded out to the flight line and the maintenance hangar where we talked about Phase inspections, the maintenance process, flight times, etc. Walking around the real deal. We then went out on the flight line to look at one of the birds. This was an open the panels, poke at things, and ask any question you want walk around. Other than going for an actual flight, this was as close as you can get. We opened the engine compartments, talked about the systems, and generally did everything you could think to do. While we were crawling over our aircraft, there were helicopters landing, warming up, taking off, practicing, etc. We weren’t close enough to feel the rotor wash of the helicopters, but only because we didn’t happen to be directly beside one taking off.
Again, about as close as you can get without visiting the recruiter and signing on the dotted line. Dan seemed surprised I was so excited to visit Simmons and see the Apaches. I guess I never mentioned that I’m a huge military nerd, a huge helicopter nerd, and of course an aviation nerd. This trip was epic for me and scratched all of my itches. It also certainly showed Spork that there are more options in the military than just flying pointy nosed jets. I’m not saying he’s signing up tomorrow to be a Warrant Officer, but he definitely knows it is an option at this point. A huge thank you to both Dan and Christian for taking their day to show a couple of civilians around and treat us like royalty. Yesterday both owners met at the airport and took the airplane up for another test flight. Our configuration was this. GTX345 with latest software update, bluetooth enabled. Unfortunately, I didn’t think to keep one of the iPads on version 9.4.3 of Foreflight to compare. We also took our Stratus on board for comparison, although we did not use it. Ask and ye shall receive! In our last update one of my conclusions was that we needed a VFR day so we were comfortable flying with a possibly broken setup, but we needed some radar returns on the map so we had something to display. Just off the coast of VA/NC we had a decent sized green return. It was about 200 miles from us at KHRJ so we planned to fly to KOCW to make sure we were close enough to it that it would be part of our regional radar return. Here we’ve connected to 6 towers on the climb out, but more importantly we’ve gotten our second update already. The first update was at 10:30, so the every 15 minutes update that we’d gotten used to. Then this update was at 10:35! The 5 minute update that Foreflight had promised with the new update. More importantly, the cross hatch lines are gone, as is the message that radar is not available. Now we are getting into the realm of where I don’t know what is correct or not.
I don’t recall seeing the above example in the past as I generally don’t fly offshore. Over land, radar shows available. Offshore, we see just the corner of the green band before the radar not available section starts up. We did give this a few minutes to see if it would update with more information. It did not. If you look at the first screen capture in this post, it shows more of the band of radar returns, further off shore. For a single engine pilot who doesn’t usually go over water, I think it is working ok. I am however flying 40 minutes over the ocean off the coast of Florida next month so I’d love an opinion here as to what is going on from tech support. Is this because the data source is different on the ground vs. in the air? I realize it transmits differently but is it actually a different source? We did receive the download error again that we’ve had in previous flights. We received it only once. I don’t know what is not downloading, nor do I know what is causing the error. I would like to know what this error means. You can see from the data block, this is after we were well established in flight so there shouldn’t have been any connectivity issues with bluetooth. Plus with everything working, we were not adjusting connections, experimenting, etc. This just popped up in the middle of what would otherwise be a normal flight. We know from previous testing that the data from the FAA is being transmitted, and received. We know from previous testing that Garmin Pilot is able to receive and display the FIS-B data from either the GTX345 or the FS510. We know from yesterday’s testing that ForeFlight is now able to display FIS-B data from the FS510, at least over CONUS. We did not test the GTX345 to see how it was working. It appears that things are working as they should, as much as they have been at any point since our initial install. I would like some clarity on what is going on with the offshore returns.
Why do we get the “radar not available” there and more specifically, why is it different than the radar we get on the ground? Will I get radar not available off of Florida (heading towards the Bahamas) next month when I try to fly offshore? I’d also like an explanation, or a suggestion on what to test, for the download error above. What is not being downloaded? What should I do to rectify this? Monday we took N54SS back to Cheraw, SC to have the transponder updated to the latest version of the software. This was Garmin’s recommendation and one I wasn’t too keen on. You see, when we connected to the GTX 345 only, things actually worked fairly well. We lost the 510’s connectivity to the iPads, but other than that, I felt that by connecting to the 345 for my trip, I could make the trip safely with proper data transfer between the 345 and the iPad. Both owners have trips in the next 30 days so I felt like maybe the devil I knew vs. the one I didn’t was the better choice. But I needed to be in FAY on Monday to fly with the Civil Air Patrol, and Daniel was available to fly with me down to CQW that afternoon, so everything lined up to have the software updated and see what happened. We did have some actionable results from our update and test flight, which I will detail with screen captures below. But before we get to the software update, let me share some of the tech support we received from Foreflight after our previous post. As usual, their tech support is email only, but in my experience, very personable and written in plain English. And more importantly, in actual English as opposed to tech support from India English. Not a minor distinction when you are having to deal with tech support. Chris here. I have been looking at your case. 1. The upcoming software update to ForeFlight (version 9.5) will allow Regional radar data to stream to the app (in addition to CONUS radar.) Regional radar updates on a set schedule of 5 minutes, compared to the 15 minutes for CONUS. 2. 
The cross-hatched “Radar not available” message on your map stems from a bug that is specific to integration with Garmin. It is a problem with Garmin encoding clear air as “no data.” We are working with them on resolving this issue. I currently do not have a timeline for when that may be completed. 3. An observation: per your iPad screenshots, it appears that you are simultaneously connected via Bluetooth to your Flight Stream 510 and your GTX-345. This is generally not recommended by Garmin – instead, you should disable Bluetooth on the GTX-345, and on your GTN panel set your Flight Stream 510 as the ADS-B gateway. Your installer can likely provide instructions on how to do this. This should account for each of the errors and discrepancies you reported. Please take a look at that and let us know if you have any other questions. We’re happy to help! So based on the above from Foreflight, it appears that we know a few things. One. There is a known issue between Foreflight and Garmin. Either all the other pilots flying Garmin panels out there were like myself and didn’t notice this error, or we have something specific to our install. Maybe our software is newer than everyone else’s? I don’t know. But there are thousands of pilots using Foreflight and Garmin every day, and yet it’s taken us this long to get to the point of “it’s a known problem.” Why is the internet not filled with these complaints? Why is there not an alert published by Foreflight/Garmin alerting people to this issue? We did do some internet research on this before calling in the experts, trying to find out what the issue was. Other than one person in an internet forum (Hi BGFYYankee!) who is having the same issue, nobody else seems to either have the issue, or notice it. In fact, our internet cohort has had to pay his shop to update his software and troubleshoot what we are now being told is a known issue. Two. Version 9.5 of Foreflight will address this issue, at least partially.
It will allow regional data to stream and update every 5 minutes vs. every 15 minutes for CONUS radar. Hmm, I’ve flown about 150 hours behind a Foreflight tablet now. Until I got into my airplane with this panel install, I never noticed an issue with update rates, streaming Regional vs. CONUS, etc. When I look at weather 700 miles away, it’s blocky and pixelated; everything within 250 miles is just fine. That I’ve noticed since day one. But having flown around thunderstorms with Foreflight (in VMC of course) I was very impressed with the resolution and accuracy of the radar returns, up to the point of penetrating light green bands and having the rain start pretty much exactly when the nose of the little blue airplane touched the edge of the green band on the iPad. I was of course watching the update time stamp very carefully at the time, and found the updates to be within a few minutes of current. I guess I don’t follow how updating at 5 minutes in a future release will be different from what we used to get just normally. Or better said, how is this enhancement new when I have multiple past experiences of it working exactly as described?

Three. We were erroneously connected to both devices, the 345 and the 510. While that is true for myself, the other owner had only ever connected to the 510, yet we both have and had the same issues. “Connected to too many devices” is something I’ve heard from our avionics installer, from tech support, and my goodness, from the internet. And I’m OK with getting rid of redundant connections, yet I’ve found it to not actually be the problem. As I said, one iPad, belonging to the Dan who actually noticed the problem to start with, was never connected to the GTX345. I was connected to both because I was there to pick up the plane after the install, we went through the connection procedures for both at the time, and I simply never changed it.
While I have found the bluetooth connections to become unstable with the repeated connects, changes, forget-devices, reconnects, etc. of testing, through all the testing we’ve done I’ve not actually found much problem with the connections once they are established. Not to introduce another topic here, but I’ve found that occasionally I would get out to the runway for runup and Foreflight had no bluetooth connection. Each time, I’ve found the problem to be on the iPad, not anything else. I restart bluetooth, restart Foreflight, or on one occasion restarted the iPad itself, and everything connects normally for the rest of the flight. Multiple bluetooth connections seem to be relatively robust, and when they do fail it seems to be an overall bluetooth issue, not something specific to the airplane. Said another way, I have as many issues with my phone not connecting to my car stereo as I have with my iPad not connecting to my Garmin panel, and with exactly the same failure mode. Regardless, we’ve connected to both, one, neither, and all three (including the Stratus, which is wifi, I know), with as few as one iOS device and as many as four, and actual bluetooth connections seem to be reliable and stable. We’ve found no evidence that connecting to more than one bluetooth source causes problems. I get the impression that the advice to connect to only one device is kind of like having you reboot your computer as step one to any tech support question on a PC. It’s always the first thing to try.

We also had written feedback from Garmin. Again, they had someone fluent in English, which when you are dealing with a global corporation is a blessing. They also have phone-in support, at least for the professional shop, which is great. Here is what Garmin had to say.

I ran the latest blog past my engineering team and they found the GTX 345 software to be out-of-date at v2.05 (see attachment, red brackets) – the current version for the GTX 345 is 2.12.
They stated that this needs to be updated to address the issue. If, after everything is completely updated, the customers are still experiencing this issue please let us know, because it will need to be addressed by our engineers.

Just to keep you in the loop, I received the following from Garmin. Disregard their request for the software versions and the Garmin Pilot app information. I sent them the link to the updated blog entry. There is no need to perform any additional testing at this time, until they have a chance to review the new information.

I’ve reviewed the entire blog and honestly have to applaud the level of testing and detailed record keeping they put into troubleshooting this issue. It appears from their description of events that the issue can be narrowed down to either the ForeFlight application, the FlightStream 510, or compatibility between the two. Previously, there was a known issue with ForeFlight having trouble receiving weather data, much like this, from a FlightStream 510/GTN cockpit installation. This was supposed to be addressed in the latest version of the ForeFlight application and GTN software v6.41. Before anything else, please double-check that the GTN has been updated to the latest v6.41. It would also further our knowledge of the problem to see if this radar display issue persists on the iPads while using the Garmin Pilot application instead of ForeFlight. While the customers said they’d eliminated that possibility, the only information I see on here is multiple screenshots indicating a successful connection and streaming of weather information from the FlightStream 510 to Garmin Pilot. If this is the case and Garmin Pilot is able to receive the weather information from the FlightStream 510, it would heavily indicate that the issue resides within the ForeFlight application and its coded ability to connect/exchange weather information with the FlightStream 510.
Please let us know if you have the chance to test that possibility, as our engineers will need that important piece of the puzzle so we can determine what requires attention.

So what did we take away from the Garmin tech support? First, on a personal note it was very nice to hear that they appreciated the work we put into documenting everything that is going on. I spent about 5 hours putting the first post together, bringing in all the different screen captures, writing up the post, etc. So it was nice to see that the effort was worth it and appreciated. Second, it looks like we needed to test the Garmin Pilot app and figure out if this is a Foreflight issue or a Garmin issue (spoiler alert: it’s a Foreflight issue, Garmin Pilot connects just fine). Third, Garmin wants the latest software on the GTX345, hence the need to fly to CQW and have it installed.

Note the Radar not available message.

So on Monday both owners flew the plane to CQW, which is about .9 on the Hobbs meter one way. Our able tech installed the update in about an hour (including some testing) and we were then on our way. We had an impromptu trip to Charleston, SC to pick up a friend and no weather on the Eastern seaboard, so it was an easy day for a flight. As you can see, we had pretty much CAVU weather everywhere. This screen shows that there is no radar available beyond what you see depicted. This was allowed to percolate for several minutes to see if additional data would display, and it would not. This is something I haven’t mentioned before, but maybe it will help with troubleshooting. As you can see, we are over 30 minutes into the flight, so data should be flowing (and is to the GTN and Garmin Pilot).

Garmin Pilot connectivity screen capture, 1 of 2.

Garmin Pilot connectivity screen capture, 2 of 2.

As you can see, the Garmin Pilot app shows good connectivity, no errors, and radar updates that are all within several minutes. By all accounts the Garmin Pilot app is working correctly.
I haven’t spoken about this error in the past, but occasionally I am getting the above on Foreflight. It seems to happen when we are having problems updating otherwise. I don’t ever recall seeing it prior to this new panel install, or in other aircraft with Garmin products. The above was taken while in flight, connected to the GTX345 after the software update. Radar data is updating only every 15 minutes, again on a set schedule: 4:00, 4:15, 4:30.

The above was taken while connected to the 345 or the 510, I don’t recall exactly. The time stamp had now not updated for 15 minutes. It did eventually update at 4:15. Why it didn’t update at 4:00 I do not know. As you can see, the Garmin Pilot app appears to be connecting and updating correctly during this same time period. While CONUS radar is slow to update, as we would expect, the rest of the data is updating on a consistent and acceptable schedule.

While working with Foreflight, and switching back and forth between the 510 and the 345, I found at this point that the 345 had completely checked out. As you can see, everything except traffic had stopped updating and we had zero towers connected. By connecting to our onboard Stratus, I was able to restore data long enough to get us on the ground in Charleston. As you can see, the Garmin Pilot app was having no issues at the same time. Bluetooth obviously was working and passing data to the iPad. But Foreflight was unable to use or load the data.

For our return flight, we’d established that the GTX345 wasn’t working, so there was no need for further testing. I connected to the Garmin 510 for the flight home and spent some time testing it. The first thing I noticed was the continued lack of data available after takeoff. Because the radar updates on 15-minute blocks (5:00, 5:15, 5:30), we ended up flying till 5:30 (we launched just after 5pm) before we had the first update for radar.
Again, there wasn’t a speck of precipitation to show on the radar, and we’ve been told that our problem is that radar only updates every 15 minutes when there is nothing to show. However, we haven’t had any weather to test with other than our initial flight back to HRJ, where we did have weather but the updates were still only every 15 minutes. In other words, we are told it will work when we have actual weather, but we aren’t willing to trust this statement because the one time we had weather, it still only updated every 15 minutes.

This is where and when we received our first data update with Foreflight, after 30 miles had been covered in the climb. Again, there is a lack of trust in what is displayed. During this time, the GTN650 and the Garmin Pilot app had updates that were 1-8 minutes old, so weather information is being sent by the ADS-B network (not every 15 minutes as we were told). It is being received by the GTX345. It is being passed to the iPad via the bluetooth connection, regardless of whether it is the GTX345 or the Garmin 510, because we see the updates in Garmin Pilot. What is not happening is that it is not being displayed by the Foreflight app. Why there would be any schedule of updates controlled by Foreflight I have no idea. If new data is available and onboard, why would Foreflight not retrieve it and display it? Why wait 15 minutes, or 5 minutes? Once the GTX345 has new data, I would expect it, as a user, to flow immediately over to my iPad. However, having no in-cockpit weather updates for a significant amount of time after departure makes flying in the weather a challenge. Basically there is a lack of trust in the depictions provided by Foreflight, and a lack of familiarity with Garmin Pilot. We need an acceptably VFR day but with some weather to display, and we simply have not had that this fall and winter. The only weather days we’ve had in the last 30 days have been hard IFR, and I’m not going to launch a questionable setup into hard IFR conditions.

Two.
Updating the GTX345 software made things worse, period. Connectivity was lessened and we seemed to introduce some instability into the data connection. This is very frustrating, as we spent time and money to make things work not as well as they already were. This also jibes with our internet friend who is chasing these same issues, as he’d already paid his local shop to update his software and it had not fixed the same problems. I actually believe if we could roll back to an old version of the software, maybe a 2016 version, we’d eliminate the issue. The GTX345s in the NC Wing for CAP are older units and my copy of Foreflight works just fine with them.

Three. Version 9.5 of Foreflight is supposed to fix some of our problems, and it came out the day after our last flight. Had we known the expected release date we likely would have postponed our test flight to accommodate this new release as part of our testing. For future reference, that would be good information to pass along to a customer. Also, I read through the release notes for 9.5, and there is no mention of this update fixing anything like what we are experiencing. Why is this problem not documented in a public location?

Four. We have yet to disable bluetooth on the GTX345. Although I’m not as against it as I was before, since the update to the 345 software made things worse, I still have not actually disabled bluetooth at this point. We will do that in a future test. However, I still do not believe we actually need to disable bluetooth, as it certainly doesn’t seem to be the root cause.

We need to fly, again, to test, again, to see what Foreflight 9.5 does. Hopefully that will happen this week. We also have a flight coming up the 15th of December. This will be a non-test (family trip) flight, so we won’t have the same level of data as our test flights, but hopefully it will allow some additional feedback on what we are seeing in a cross-country setting with the new 9.5 release.
We have another flight January 8th which will be out of the country. I fully expect to see some weather on this flight, so we’ll have another data point for testing. All of these flights will be flown with the Stratus 2 as backup so that we can safely operate. Foreflight updates and operates normally with the Stratus, which is particularly frustrating after spending 30k on a new Garmin panel. We will not be making any adjustments to our setup between now and after our January trip for fear of making things worse again, but we are anxious to hear of solutions to our problems from tech support. That is something we can look forward to testing after our January trip.

Often I have a chance to fly with someone, and despite our best efforts we end up flying into dusk or into full-on night time. As the sun goes down and the light diminishes, I casually reach over to my iPad and do this. The reaction I get ranges from “That’s cool!” to “That’s witch magic! How did you do it?” I’ve yet to have another pilot say, “Meh.” My wife? I couldn’t even get a meh out of her, but that’s different. My fellow pilots are anxious to change their iPads too, but I can never remember all the steps to set up an iPad for night mode. Fortunately I can link the original story here where I learned how to do it. Now I can just direct my friends to this site, which will redirect them to the article where I originally learned this trick. See tip #8 for how to set up night time mode. Then apply tip #4 to put this red-only mode on a triple-click quick access. But DON’T actually do tip #4 as it’s described. I find invert colors just confuses things, and red-only mode is all you really need for night time flying. Also, red-only mode does wash out the display of some charts. If that’s an issue for you, it’s quick and easy to triple-click back to normal colors, see the thing you are struggling to see, and then triple-click back to night time mode.
With the screen auto-adjusted down to minimal light anyway, it doesn’t seem to hurt much to look quickly at normal colors and then switch back.

Today I believe I have discovered the root cause, or at least the source, of our error. It appears to be the Garmin 510. I will describe what I did, and what I found, below. The suggested resolution is at the bottom of this post. I received the following email, forwarded to me from our installer today. This was from the Foreflight tech support.

“Could we try and get some detailed information from you? While connected to your Garmin devices, open ForeFlight, select **More** (**Menu** if using an iPhone) > **Device** > please take a screenshot of this page. Next, tap on the Flight Stream Connext tile and take a screenshot of this page.”

Today I decided to go back down to the airport and gather the requested information. Nothing was mentioned of whether I should be in flight or not to show this data, but I assumed we needed to be receiving ADS-B towers to get usable information, so I went up for another test flight. The weather was once again severe clear with no precipitation anywhere in the area. For this flight, I again went up with the same two iPads described in the previous post. I also took the same Stratus 2. One iPad was initially configured to bluetooth to the Garmin panel, one with wifi to the Stratus. The first screen captures were from the iPad connected to the Garmin panel, both GTX 345 and Flight Stream 510.

The iPad connected to the Garmin panel, both to the GTX345 and the FS 510.

The above was the configuration right out of the box, after takeoff and once towers were in view and data was flowing. The below is what the map screen looked like. All of these were taken within a few minutes of one another, as fast as I was able to take them while flying the plane solo.

iPad connected to the Garmin panel, map view in Foreflight. Note the “radar not available” message.
Since our previous test, I’d installed Garmin Pilot onto my iPad mini that is normally used with 54SS. I still don’t have any experience with Garmin Pilot, but I tried to grab any screen shots that might prove useful. Again, these are taken on the iPad connected to the Flightstream 510 and the GTX345 via bluetooth.

Here I have an iPad connected to the panel via bluetooth, and the Stratus via wifi. It was an accident that I had both at once, but I thought it might show you something, as the radar was being received and there was no error message.

Once I had captured the requested screen captures, I started doing some experimenting. Here is what I found.

iPads connected to the Stratus showed no errors, as before.

iPads connected to the Garmin panel, both to the GTX345 and Flightstream 510, showed the “radar not available” message. This error is consistent.

iPads connected to ONLY the 345 or the 510 showed the same issue as before, “radar not available,” the same as in the test on 12/1/17. However, by allowing about 15 minutes for the data to stabilize/update/whatever, I found the iPads connected to the GTX345 ONLY would eventually remove the “radar not available” message and the grey crossed lines. iPads connected to the 510 ONLY would continue to display the “radar not available” message for as long as I let the test run.

In our prior test, we did not allow a full 15 minutes for the changes to take effect, mainly because when we connected to the Stratus the error message disappeared within 60 seconds, so we expected a similar result from the GTX345. That is the difference between this test on 12/4 and the last test on 12/1. I was able to replicate the issue across two iPads. Whichever one was connected to the 510 only showed the problem. The problem could be eliminated by connecting to the 345 only, or of course the Stratus.
I did note that even when the iPads were working as we would expect, with no “radar not available” messages when connected to the GTX345 only, the data refresh was still different from what is shown on the GTN650. The data would consistently be only 2-3 minutes old on the GTN, but maybe 10 minutes old on the iPad, even though it appeared to be working properly. It appeared as if the 15-minute update issue was still present, but it’s not 15 minutes now. It’s 10 minutes, or 5 minutes. As you look through these screen captures, understand that I was flying single pilot in this test, with the sun going down blinding the traffic search in about 40% of the sky, trying to avoid traffic that was in the area. I couldn’t just stare at the update results, but it appeared that every update on the iPad was at 4:30, or 4:35, or 4:40. I didn’t see any updates at 4:32 or 4:36; it was always on an even 5-minute mark (I know 5 is odd, I hope you get my meaning). Here are all the screen captures I took at all stages. Note the update time stamp showing the time sequence I reference, while the actual time stamp at the top changes. When I would pull up the GTN650 time stamp, it would consistently be updated to what I’d call a random time, 4:37, 4:29, etc. The GTN time stamps are also consistent with what I see when flying other airplanes; the data block on Foreflight doesn’t update on even times when flying those planes.

Picture of both iPads: the iPad connected to GTX345 only, and the GTN650 with an update as of 2-3 minutes ago. Picture taken at 4:54.

I apologize the above picture is blurry. Single pilot and all that. As you can see, the picture was taken at 4:54. The last update is 4:40, which has just turned orange, indicating it has gotten old. This was during testing, so it took a few minutes for the iPad to update to the latest information after it connected.
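The pattern I am describing, iPad time stamps locked to even 5-minute marks while the GTN650 shows free-running times, is easy to check mechanically. Here is a minimal sketch in Python; the stamp values are hypothetical examples standing in for what the two displays showed, not logged data:

```python
# Check whether a set of update time stamps is locked to 5-minute
# boundaries (4:30, 4:35, ...) or free-running (4:29, 4:37, ...).
# The stamp values below are hypothetical illustrations.

def on_five_minute_mark(stamp: str) -> bool:
    """True if the minutes portion of an H:MM stamp is a multiple of 5."""
    _, minutes = stamp.split(":")
    return int(minutes) % 5 == 0

ipad_stamps = ["4:30", "4:35", "4:40"]   # what Foreflight showed
gtn_stamps = ["4:29", "4:37", "4:54"]    # what the GTN650 showed

print(all(on_five_minute_mark(s) for s in ipad_stamps))  # schedule-locked
print(any(on_five_minute_mark(s) for s in gtn_stamps))   # free-running
```

A schedule-locked source will answer True to the first check; a free-running one like the GTN's will have stamps scattered across off-minutes.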
However, in this instance it took several minutes, till 4:55 if I remember correctly, which you can see the result of in the previous screen capture just above. What is concerning is that even with a proper connection, and old data, the iPad doesn’t update from the GTX345 for almost 15 minutes even though the GTN650 has data that is only 2-3 minutes old. It’s possible that this is just a reporting issue where Foreflight reports updates differently than the GTN650; however, since this is an issue that I don’t see replicated in the CAP aircraft I fly with the same iPad, I think it’s another issue with our installation.

Interestingly, when I connected to the Stratus, the radar not available message went away almost immediately. When I connect to the GTX345 it takes as much as 15 minutes for the message to go away. However, I seem to recall that the Stratus buffers/saves data so that it can be delivered in a burst when connected to an EFB; something about it being used with devices that are operating on batteries. I assume that the GTX345 has no such feature, since it’s designed to run on ship’s power and operate continuously.

Since the 510 is such a small and easily removable device, my recommendation is to ship a replacement 510 directly to me to be installed locally. I can then test fly with the replacement 510 to see if the problem persists. This is opposed to shipping the 510 to Pee Dee in SC, necessitating another flight to maintenance. If the replacement does not fix the problem, we know we have a software issue. If the replacement solves the problem, we know we had a bad device. Either way, we can return the extra or bad 510 once we have tested. All it will cost is overnight shipping both ways, which is less than the cost of a flight to SC. However, tech support may have a better idea once they have this additional data.
This page is published as a public repository of all of the information we have concerning the issues we’ve had with our ADS-B install on N54SS. It is intended as a central knowledge reference for any and all technical support who are assisting us in coming to a successful conclusion to our ongoing issues. Nothing stated here is intended as a slight towards anyone, or as questioning anyone’s expertise or efforts. It is simply a statement of the facts as they are known at the time. I will update this page, or this blog, with our progress in solving the issues going forward. All images in this post are clickable to improve detail clarity. Many require clicking to see the needed detail.

This is a summary of the issues we’ve seen and continue to see in our testing of our new Garmin installation in N54SS, installed by Pee Dee Avionics in Cheraw, SC between August and September of 2017. There has been no indication of any issue with the physical installation, only with the devices once they were installed. This is also a detail of the test flight data we’ve collected from multiple test flights (11.4 hours total) over the past months, but primarily it comes from three flights conducted on 12/1/2017, as these contain the latest information. The reason for conducting these tests in flight is that our local field (KHRJ) does not have an ADS-B tower in view from the ground. However, immediately upon takeoff we have multiple towers in view. Most flights were conducted orbiting near the airport at 2000 feet MSL.

There are two owners of N54SS, Dan Moore and Daniel Cooke. They live one hour from each other, and the home airport (KHRJ) is directly between the two of them. It is unusual for both owners to be at the airport at the same time due to scheduling conflicts. Most test flights have been conducted by Dan Moore, but thankfully the test flights on 12/1/17 were conducted with both Dans in the cockpit.

All in-panel avionics powered and operating normally.
Primary iPad mini in panel mount, iOS 11.0.3, connected to GTX345 via bluetooth and Flightstream 510 via bluetooth.

Secondary iPad 4, iOS 11.1.1, connected to a Stratus 2 via wifi.

iPhone 6s, iOS 11.1.2, connected to GTX345 via bluetooth and Flightstream 510 via bluetooth.

Additionally, on our last flight on 12/1/17, we connected an additional iPad to the Flight Stream 510 via bluetooth. This iPad was running Garmin Pilot. We traditionally only use Garmin Pilot to update the database.

The latest ADS-B data information block as shown in Foreflight, noted by the arrow. The data in this picture has been expanded to show the additional time stamp info.

We have had several failure modes since the original install of the Garmin products. The first was that the data block, as noted in the picture above by the arrow, showed that “no data” was being received despite being connected to multiple towers. We were able to receive and display ADS-B traffic and METARs, etc., but no radar. This issue was corrected, we assume, by a software update performed by Pee Dee Avionics on 11/6/2017. This particular failure mode has not manifested since the software update.

However, the flight home from KCQW to KHRJ is shown above in Flightaware.com. While there was no weather in SC that day, there were returns in NC, some relatively close to the flight track. This is the flight where the data block only updated every 15 minutes. With weather on the radar, the regional radar should be updating every 2.5 minutes. Also, other aircraft I fly in with GTX345s update much sooner than every 15 minutes. However, this issue has since disappeared and the updates seem to be happening normally. There was no service work performed to explain this change in operation, nor software updates to the panel mount avionics or the iPads in use, so we have no explanation for why the updates are back to being every few minutes.
When we flew the last series of test flights, there wasn’t a radar return for 500 miles, so by the explanation we were given it should have updated every 15 minutes instead of the 2-5 minutes it was actually doing.

On 11/24/17, N54SS was flown to KHEF and back on a trip spanning several days. The pilot reported that his Foreflight display showed “Radar not Available” on his map in Foreflight for the entire flight. This iPad was connected to the GTX345 via Bluetooth.

Radar not available. It’s only viewable when zoomed in very closely.

On the first test flight on 12/1/2017, I first flew the airplane alone, waiting for the other pilot to arrive from work, and to test another problem that we had (this one was pilot induced). I did not note the “Radar not available” on my iPad, but when the other owner pointed it out I found that it was indeed there. You need to be zoomed in fairly tight in order to see the words. There is also a series of crossed grey lines that I had noticed but didn’t recognize for what it was until it was shown to me by the other owner. When in this failure mode, despite showing radar updates per the data block, the entire map is greyed out and “radar not available” covers the entire chart.

After being shown the failure mode, both owners took the airplane back up for another test flight on 12/1/17 and were able to demonstrate the “radar not available” issue readily. We compared the iPads connected to the Garmin panel mount avionics to an iPad connected to a Stratus 2. The picture below is a side-by-side comparison from in flight. This picture is full resolution, so you can click on it to make it bigger. Only at full size do you notice the grey line pattern on the left iPad. It’s much easier to see on the screen capture above. If you click on the image to view it full size, you can see that there are faint grey lines on the chart on the left iPad. The iPad on the right shows no such lines; it is receiving radar.
For testing purposes, we connected and disconnected various iPads to the panel mount avionics, and to the Stratus. When the iPads were connected to the panel mount avionics, the radar not available issue was consistent regardless of the iPad in use. Also, when an iPad not receiving radar was connected to the Stratus, the “radar not available” went away immediately. This does not appear to be an iPad issue.

In addition, we grabbed a screen capture of the failure mode on the iPhone. Here the grey cross-hatch lines can be seen easily. However, it is not zoomed in enough to see the words “radar not available” except at the very bottom of the map behind the glide information. You can also note that the iPhone showed no data.

There was a period of time on the second test flight where all the connects and disconnects seemed to disrupt everything. Devices didn’t match each other, some had issues connecting, etc. This was a test environment with lots of button pushing. We landed, powered down the airplane and avionics, and powered down all portable devices and booted them fresh for the last test flight.

Picture taken at 4:37. The time is just cut off in this picture.

We also tried to compare the data from the GTN650 to the data shown in Foreflight. This picture is also full resolution, meaning you can click on it and get a better view. The FIS-B data on the GTN650 shows three minutes old at 4:37, meaning it was received at 4:34. The latest data block on the iPad shows 4:25, 9 minutes older than the data shown in the GTN. Again, this iPad is connected to the Flightstream 510 and GTX345 via bluetooth, so I have no explanation for why the data on the iPad would be older or at a different refresh rate than the GTN.

When we landed again, we decided a last test would be to install Garmin Pilot on one of the iPads and fly again with Garmin Pilot connected to the panel mount avionics.
Neither of us is very familiar with Garmin Pilot, so some time was spent trying to find similar information in the application. These screen shots are as close as we could come to showing some helpful information in Garmin Pilot, given our limited familiarity.

Lastly, I routinely fly with the Civil Air Patrol’s NC Wing. NC Wing has equipped all 17 of their aircraft with Garmin GTX345s. I connect my iPad, the same one used in these tests, to the Civil Air Patrol aircraft via bluetooth when flying. My experience in the CAP aircraft is that everything discussed here works flawlessly. The data block updates every 2-5 minutes regardless of weather conditions, or lack of weather; radar is shown (no grey lines, no radar not available); METARs update; etc.

Also, in flying with the Stratus 2, I find that the issues discussed above also do not happen. In our testing on 12/1/17, we found that the Stratus began transmitting data to the iPad almost immediately upon takeoff, whereas the GTX345 took several minutes to begin transmitting data. We never once had an issue with update rates, radar not received, etc. from the Stratus.

There have been several theories submitted for what is going on. One: radar data only updates every 15 minutes because there is no weather to report. Two: there are too many devices using Bluetooth, and it’s causing a bandwidth issue.

For the first theory, the Flightaware.com track from 11/6/17 for N54SS shows weather in the region. Per the details forwarded by Pee Dee Avionics, updates should have been happening at 2.5-minute intervals because there were radar returns in the immediate area. I do not believe the update rate from the FAA is the issue.

For the second theory, we tried several things to mitigate connectivity issues. We flew with only one iPad connected to the panel mount avionics, and nothing else powered up. We removed several devices from the bluetooth page on the GTN, narrowing down to the test device or devices depending on the individual test.
We connected different devices at different times to eliminate any issue related to one device; basically, we rotated all connections through all devices to make sure no problem stayed with one iPad. We powered everything up: Stratus, all iPads, iPhones, a non-Apple phone, everything. With the exception of the non-Apple phone, everything worked as before, with no inconsistencies other than the problems described. We were not able to connect that phone during our test, but it had been connected previously; it is the one we use to update our databases. Finally, we connected devices that were not showing radar to the Stratus. They would immediately lose the "no radar" message and the grey lines. We would then reconnect to the Garmin, and the "no radar" message and lines would reappear. The issue is repeatable and tied directly to the Garmin products installed in N54SS this summer. From a customer's perspective, these issues have been present, in one form or another, since the products were newly installed. We have eliminated iPad issues and ForeFlight issues via our testing. These issues are a direct result of our initial install of these products, and I believe they are a software issue internal to the Garmin products. I do not know at this time whether the issue is related to the Flight Stream 510 or the GTX345. After 11.4 hours of testing, I'd like a better opinion of what we are going to fix before I make another flight to either test or reposition to maintenance in SC. In May of 2016 I, along with my then 12-year-old son, attended the AOPA fly-in in Beaufort, NC. Since I was already there, I decided to take the Rusty Pilots program. I was of the opinion that I'd forgotten too much to get back in the air, and I was a bit nervous to go into a classroom and demonstrate just how much I didn't know.
I still had a pilot's license in my pocket, and as long as I didn't prove I was clueless, I was technically still a pilot, able to impress people at parties and tell funny or harrowing stories on demand. If I attended class and proved how much I didn't know, well, that would be more than embarrassing; it would be tangible proof I was no longer a pilot. But with the class available right there at the fly-in, I really couldn't not attend. I've been a consistent AOPA member since the early 90s, and I'd always considered going to MD to attend the fly-in but never could justify the trip. Major kudos to AOPA for bringing the fly-in out to the field; I don't know how I'd have gotten back in the air if I hadn't attended. I'd stopped flying in mid-2004, when I had to take over our family business after my father got cancer. I'd gone from flying a King Air 200 solo to flying a desk, and I did so from 2004 to 2015, when I sold the family business. The sales process was brutal, and one of the things I told my wife as I went through the 1.5-year process was: I'm going to buy an airplane when I get this done. She was always supportive, except for one time I'll get to in a minute. Step one for me was to get my medical back. Luckily it was a non-event: I'd recently lost 60 pounds and I was now farming every day, so I was in pretty good health. With the medical out of the way, step two was to find a partner in an airplane. I don't hang out at the airport or really know anyone at the airport any longer, and I didn't have a way of contacting someone like-minded. There were no flying clubs near me that offered what I needed (a six-seat, go-places airplane), so I was stuck. In desperation I placed an ad on Barnstormers looking for a pilot/partner. I only had one response, from some guy who was still getting his PPL. No way he'll be able to get insured in the type of aircraft I need; he's just dreaming. It turns out he has been the perfect partner, and I couldn't be luckier.
He and I purchased a 1978 Lance in May 2017, and we've already put over 100 hours on it. We just upgraded the panel to a GTN650 with the Flightstream 510 to let the iPad talk to the GPS wirelessly. Modern avionics are AMAZING, and they are one of the reasons I've come back to flying. While I was at the AOPA fly-in, I also talked to the Civil Air Patrol recruiter. My son was 12, the perfect age to join. I shoehorned him into a conversation with the CAP recruiter and stood back proudly as I watched her lure him in. As I was standing there, a Marine pilot asked me what I did. When he found out I was a pilot, he put the hard press on me to join CAP as well. Have you ever tried to push someone into the pool and ended up falling in yourself? That's what happened to me. So in addition to flying myself and my family around, I am now 1st Lt. Moore, flying CAP aircraft in training and on missions. Between my plane and the CAP plane, I have over 110 hours in the last 12 months; most of that (80 hours) is in the last six months. Things are picking up. Now back to my wife being supportive. Before I purchased the Lance in May, I'd mentioned, again, that I was going to buy a plane to the Mrs. "I don't know why you'd do that. That's a bad idea." I was immediately defensive, but she continued, "If you are going to get a plane, you should build a plane with your son. That would be a great experience for him and for you." This was back in 2014, I think. Since that conversation, I've been to Sun N Fun and Oshkosh (both for the first time) and have finally selected our project: a Just Aircraft SuperSTOL, which was purchased in October 2017. We start in March of 2018 with hopes of flying by spring of 2019. I've already started a blog to document the build at farmerflier.com, so hopefully anyone can learn from my mistakes later. In January 2018 I am flying the family to the Bahamas for a vacation, our first aviation vacation as a family.
So from a Rusty Pilots seminar in 2016, I now fly as a volunteer pilot for CAP, fly a Lance for myself, am building an airplane, and am in the process of becoming an airplane blogger. I've completed my EAA Young Eagles checks and am signed off as a pilot there; once the SuperSTOL is finished, I plan on flying the wings off of it for Young Eagles flights. I'm more involved in aviation than I've ever been. So what got me back to flying? Swallowing my pride and going to the Rusty Pilots program was first. I went from nervous, to excited, to actually a bit bored at one point. I remembered SO MUCH of what I needed to know, and the rest wasn't that difficult to relearn. IFR proficiency has taken a while, but the basics of flying came back like they'd never left. Was I passionate about flying? I still read Flying magazine and AOPA's magazine, but that was about it. I could have never flown again and been happy; I'd been there and done that, and I had plenty of other chores to keep me busy (I was working about 80 hours per week during all of this). What changed for me was ADS-B, in-cockpit weather, in-cockpit traffic, GPS approaches, and ForeFlight. The amount of information that I have at my fingertips when flying a NORDO aircraft is well beyond what was in the fanciest jet when I stopped flying in 2004. Situational awareness is ridiculously easy, and the real struggle is to still do the homework on the ground because so much is available in the air. The AOPA Rusty Pilots program was the spark that relit the fire for me, but the technology is what is pouring gas on the flame. My iPad isn't simply a lightweight replacement for my old Jepp binders; it is a safety item that I won't leave the ground without. As more technology becomes reasonably available, thanks to the efforts of AOPA and EAA and the FAA (e.g. Garmin G5, TruTrak autopilots, AoA, etc.), I think aviation gets better and better. I don't need an $800,000 Cirrus to be mission capable.
An affordable older aircraft, with an iPad and the required transponder, will give me more than I ever had before. And a SuperSTOL flying at 500 feet and 100 mph has all the capability of a Gulfstream, as long as I don't want to actually go anywhere. Today, Dustin and I made a $100 hamburger run to Virginia. Except we weren't hungry; we were thirsty. We'd been on vacation in Virginia about a month ago and had toured the Virginia Distillery Co. I'd brought home a few bottles of their different whiskeys, and seemingly before I could blink they were all gone. I'm not saying SWMBO drank a bottle of whiskey nearly by herself in a week. I'm just saying there was definitely a containment problem with the whiskey, and by the time I'd gotten to the bottle, it was pretty much empty. No clue how that could have happened. Today the weather was severe clear, not a cloud in the sky. Dustin and I trundled down to the airport about 8:30am, after a mandatory stop at Angie's for breakfast. We pulled the plane out, fired it up, and headed north. We did a quick jog to the west to stay clear of RDU's airspace, passing over Jordan Lake and then Chapel Hill before turning due north towards Lynchburg. The pattern was busy at Lynchburg; the flight school keeps the controllers hopping. But we fit into the flow after some adjustment, and I had a landing of 4 out of 10. Dustin politely said he'd had worse. We made introductions in the FBO and borrowed the courtesy van to make the drive to the distillery, about 35 minutes door to door. We arrived at the distillery and ran into some of the same great folks we'd met on our tour. When they found out we'd come just to buy whiskey for our home bars, they were pretty excited. Is there a limit to what you'll do for whiskey? I'm not sure I know of one.
What I didn't tell them was that it was SWMBO's birthday this month, and I wanted to bring her back some of the ingredients for the drink she loves so much, so this really was a honey-do item on my list. We loaded up a case of booze for our return trip and hightailed it back to KLYH for our return flight. We shot the GPS 23 approach into KHRJ, testing out the new avionics setup. It mostly worked well, but the glideslope capture didn't work for some reason. Oh well, more to learn and test, I suppose. At least my landing didn't break any of the bottles in the back. Mission to get booze via airplane. Check!
2019-04-20T20:15:54Z
https://farmerflier.com/page/7/
By way of example, essay writing is a challenging endeavor. Discussion of up-to-date research and academic sources should come early in the paper. Fortunately, the process for writing a dissertation has been refined over many years. Instruct each applicant to explain in writing how they would handle the position. For instance, if your book is about self-improvement, consider how you would like to serve your readers. If you want to improve your English writing, remember that there are many different techniques you can use. Some of the best publications to begin with are non-fiction books, as they are written in fairly straightforward language that is not very difficult to grasp. These are a few of the essential tips about writing that can be adopted. For your bouquets, you will need three different kinds of greenery plants plus some ribbon. So reading is really essential. Writing in English is not an easy task. Reading a newspaper daily is also a very good and powerful way of improving English writing, and studying many different kinds of books and publications is among the best and most effective strategies for boosting your writing skills. That is my advice to you if you are a teacher who would like to quit. I am going to become a history teacher. Although this is a position that is not widely accepted, some teachers still believe that holding a child's attention may be the single most important factor in learning to read. Moreover, the teacher should make sure there is enough pausing, at the appropriate time, in what is said. Regardless of what form our characters take, for the purposes of writing, they can be human. So there is a lot you can do with your papers and documents. I wish you all good writing, and the friendship of superb characters.
In conclusion, you need a contemporary medical rhetoric that you can relate to your own writing (24-25). Thus, you must find means to test every applicant. This evaluation is conducted by someone apart from the worker's department. What sorts of tests are needed depends on the research topic. Plainly, an interested child is most likely to be more enthusiastic about learning. The test may be implemented in the form of a written exam or as an actual demonstration of skills. If you reach 3 cm before the heart, stop. There are numerous different types of English essays and articles on offer that you can use as a means of boosting your English-language skills. The activities and services that you provide are based on ways to creatively maximize your own skills. Some educators advocate producing a little book from the child's sketches. There is no way in this piece to anticipate every possible focus that may be requested in essay prompts, but it can note the best way to locate them, and easily. Authorship is truly an art form. The dreaded college essay is most difficult when it comes to actually coming up with a subject to write about, and the standard student essay offers limited real estate to prove a point. These things must be considered before a beginner enrolls. So that things appear coherent to the audience, it is always good practice to ensure that all of the items are correctly ordered within the table. "My English teacher tells me that sort of thing constantly!" From that point, the piece can begin with a concise summary of what the matter is about, followed by the main body of critical points the writer identified on the topic at hand. If you happen to be writing something that should have a professional feel to it, there is no better tactic than to use English writing prompts.
With respect to writing, watching films is inadequate. A piece must provide the essential information, and it needs to do so in an interesting style, so that it keeps drawing the audience on to finish reading the entire post. Consider your audience along with the topic of your own post. It is the kind of paper you would write before writing a solution paper. Use the very same guidelines and grammar that you would if you were composing an online post. Twitter can supply you with the practice you need to develop into a better, faster, more succinct writer. Apply the principles of teaching to your own writing practice. There is practically no discussion here of the structure of a properly ordered essay. Interview essays enable you to use people as your sources as opposed to books. Writing a research paper is not easy, but it is important. This is a fundamental technique that will permit you to quickly compose a short, focused, informative essay that you can use for your own school requirements. Writing forms a vital element of many people's day. Again, simply begin writing; no matter your objective, just keep on writing. Writing a curriculum vitae can be challenging, but there are various resources available to help you. It is a unique generator of imaginative inspiration. The writer must also be grammatically correct while writing the proposal. The style of this type of essay is quite evident, as we will see in the following paragraphs. Particular purposes should be kept in mind when composing an acceptance letter. Writing an essay is really not a hard task once you understand the structure well. Should you be writing the letter by hand rather than typing, ensure the handwriting is readable and clear. Composing a suitable cover for an essay you have written is not a very challenging undertaking at all, but it is the most ignored.
In the next post, we will look into the synopsis of an argumentative essay in more detail and attempt to comprehend how an ideal argumentative essay ought to be composed. Contrary to other essay writing companies, we have writers who are trained in how best to write academic papers in a number of essay styles. Visit our essay writing business and enjoy distinctive and skilled essay services. It also provides the future direction of whatever is contained within the essay. Having been in the field of essay writing for years, we have become a worldwide essay writing business, and we have trained our essay writers in the numerous citation styles that are frequently used by diverse academic levels and institutions. It has been my argument that our educational institutions aim low, and hit their target. In school, I really like science and mathematics. A proper format is crucial for the progress of a powerful essay on any particular matter. First and foremost, it is vital to choose an essay topic; it will house the critical content of the essay. The moment you have picked a subject, it is time to actually write the essay. Whichever form you select, adhere to it for the whole written piece. The first format that one may use is the block structure; in this form, most of the text within the letter is left-aligned. Critical thinking is a term that has existed for some years. This is the reason it is essential for managers to be able to perform some critical thinking prior to making decisions. You do not automatically comprehend how you feel about a topic or what you want to say regarding the matter; you let the research, including your own thinking, determine the outcome. There is a great deal of misconception concerning the topic of critical thinking. The best method to develop critical thinking is to write.
Basically, critical thinking is about utilizing your capacity to reason. Critical thinking does not mean that you are simply going to detect faults in others' articles; rather, you need to read an article considering all its facets, both what the author wants to indicate and what the author wants to conceal. A critical-thinking business that is good enough to write for you will have a whole panel of certified and skilled authors and thinkers who will be capable of considering any subject delegated to them critically. Actually, this is just what the professors want. Pupils should prepare themselves to respond to numerous arguments and ideas, in addition to styles. The papers should be proofread in order to make sure that they are free from any errors. A writer is permitted to work on your papers only if he or she is competent to prove his or her mettle in academic writing. Well, my very first guideline for writing an essay is to make certain you have a good subject of debate. My second key rule for composing an essay is always to make it clear where you are heading. The purpose of a rough draft is to get your own thoughts in writing. It is quite essential to produce a powerful thesis statement, so set down your plan for the essay at the start and then use the balance of the essay to really develop your argument. Critical-thinking writing is among the most interesting kinds of academic writing. Critical reading differs from suspicious reading. Generally, the standard system adjustments are wonderful. This synopsis includes five common stages of reading. It is a kind of reading most people should engage in frequently. If you discover something perplexing, search for words with multiple meanings. The word "critical" has negative and positive meanings.
Now the task requires you to read through the entire essay slowly and carefully, looking at each sentence and each word; you have already skimmed through the essay quickly to get the gist of it. Make certain you have a really clear point that you need to convey in your essay. Concentrate only on how well the employee performed during the present evaluation period. It is extremely important to obtain a second opinion on your essay; sometimes, when you have been working on an essay for a long time, it can be very hard to take on a fresh, objective stance and analyze it. Perhaps it is helpful to think of an essay in terms of a conversation or discussion with a classmate. The reader should understand what you are saying and should know the course that you are taking within the essay. Since it is the first paragraph of your essay, your introduction should help readers recognize your chosen topic. Thus, if the title is in the form of a question, make certain to answer the question. You have to ask questions, and then you should strive to answer them; be certain you have addressed all of the questions you raised during pre-reading and critical reading. Essay writing does not need to be hard, but a good deal of preparation ought to go into your essay before you begin writing it. Even when writing an essay based on personal experience, it is good if you are able to back up your own views with facts. It was a general assertion that is certainly eye-catching but nonetheless expresses the overall issue of the essay. Often, depending on the form of essay, the second body paragraph may be used to make a concession and rebuttal. Select the format that will be most effective for getting your point across clearly. For the moment, however, beginners are going to learn the fundamental essay format. What you end up doing here depends substantially on the kind of essay you are considering writing.
This will make up the fundamental skeleton and precis of your essay. He played the guitar, an instrument he would use throughout his life. Use this overall essay structure to make a detailed outline for your own essays. Think of your introduction as a map of your essay, or perhaps as a guide. The introduction, or opening paragraph, is quite a pertinent part of your essay, because it states the principal idea of your composition. Because this short article is only about how to begin writing good English compositions, let us now delve straight into that. In writing an essay or an article, it is crucial to understand that your examiners are not only going to look at your content; they will also look at important things like your grammar and punctuation, alongside your style of writing. That is essentially the conclusion of your essay. Considering the way you can place a source in your paper is easily the most critical part of this procedure. The writing of a disclaimer can be a tricky task for somebody who has never written one before, so I am going to give you a succinct guide you may use and follow. People want an employer who will compensate them for their hard work and recognize them for their achievements. If you will treat this as a learning process, it will help you greatly. The steps of this type of analysis give you the ability to discover areas of agreement with your audience in order to be more effective. Each paragraph should have a topic sentence, which will be among the reasons to believe the thesis. In addition, a thesis does not need to be one single sentence, and it does not have to be in a formal essay form or in perfect sentences. For a standard five-paragraph essay, you will need a minimum of three reasons, or elements, to your answer. The question could be a part of your introduction, or it could make a good title.
Therefore, this part of the proposal gives you a chance to prove to your audience that the issue you are addressing is worth addressing. Write an answer to that question. A quite simple thesis statement might be something like, "A good leader should have wisdom, outstanding judgment, and bravery." In an exploratory paper, you are asked to look past the obvious answers in order to locate other points of view, which can sometimes help in solving the problem. When you organize your essay and jot down the points you will discuss in your draft, you will have plenty of things to talk about. After that, you must select among them for the conclusion of your work. Some of the approaches are going to require a long time, and students do not get a lot of time to really think about producing an excellent piece of essay writing. Treat the essay as a process, not as a task bounded by deadlines. Avoid assuming the audience has precisely the same level of expertise as you, unless of course you are writing for your own pleasure. The amount of research you are required to do may vary, contingent upon the subject, and it takes a great amount of persistence and work to teach novices to write. To help myself, I determined the average word count per page of a paperback book, along with the font size, and did the math. Part of being a good person is helping others become better people. Easier said than done, but it is valid to follow a particular pattern to make the essay an interesting read. Short paragraphs result in easy reading. This section will supply you with suggestions on writing a good introduction. The best key to writing a successful guide is to consider who will be reading it. A scientific report does not need a thorough, ornate prologue.
Whenever you are writing a dissertation, there is an overall established structure to be followed. For instance, if you happen to be writing about "how to make a paper boat", try to explain the process in plain words. For essays that require research, be certain you are using good, high-quality sources of information. Research competitors to determine what services to provide and how much to charge. There are numerous distinct subjects that one may use in writing process papers, and this list deals with several of the easy-to-write essay topics. Secondly, be sure to comprehend what you are asked to do in your essay. The essays may cover every prospective issue under the sun. Interview essays enable you to use people as your sources instead of books. Just make certain your essay does not seem purely informational. Should you be going to write an interesting, exceptional essay, you will need to carry out research. There are ample ways to begin an essay. Otherwise, it will look as if you did not take the time to do that little extra investigating to come up with the additional recommendations that would make it a really amazing essay. The writer will only fully grasp the subject after devoting a great deal of time to research work. Writing an article should, above all, be a fulfilling experience for the person composing it. Keep your answers brief; only points such as education and prior work experience need to be included within the response. A reader's thought process needs to be invoked by an essay. This is undoubtedly the most typical interview question that will likely be asked of any candidate. You could also highlight the essence of your work as well as your job responsibilities, in brief. Only then will the writer have the opportunity to do full justice to the subject. Many students lack useful background throughout their research. This was demonstrated by Byrne in 1959.
This is valid especially for students who are writing a scientific thesis. In this sort of writing, you have to describe a piece of information from scratch. The following are the general instructions that you should follow, depending on the kind of dissertation or research paper you are writing. You must plainly describe the purpose of the experiment and set out the procedure in a brief and precise style. In the methodology chapter, it is crucial that you supply the reader with a quick summary of the way you gathered the information and material for your document. Typing technique really cannot be separated from daily life, especially for people who work, from the typewriter era to today's computer era. If your job puts you in front of a computer screen, you are certainly familiar with the keyboard: the board of keys representing letters, numbers, and character symbols that you use to type everything from a final thesis to office work. Do any of you feel that your typing speed is a bit slow? Then you have come to the right blog, because in this article we will learn the easiest way for beginners to type with all ten fingers. The first time you use the ten-finger typing technique, if you are not used to it, it will certainly feel stiff and frustrating, because you have to use all ten of your fingers. But that is how it starts: once we are accustomed to ten-finger typing, productivity increases, because you can finish your assignments or work faster. Let us go through a few tutorials and videos on ten-finger typing from YouTube. Do not worry that you will not manage it; all of us can become quick and skilled as long as we keep practicing with discipline.
There are a great many ten-finger typing video tutorials on YouTube, but the one we will use this time was made by Howcast.com. In this video, the points to emphasize and pay attention to are the position of the hands when placed on the keyboard, above all the two index fingers: the left index finger rests on the F key and the right index finger rests on the J key. Remember, those two keys are the reference points. The most important thing in the video is the finger placement, as explained below. Left hand. Little finger: presses the letter keys Q, A, and Z, the left Shift key, and its number key. Ring finger: presses the letter keys W, S, and X, and its number key. Middle finger: presses the letter keys E, D, and C, and its number key. Index finger: presses the letter keys R, T, F, V, and B, and its number keys. Thumb: presses the space bar. Right hand. Little finger: presses the letter key P, its number key, the [;] key, and the right Shift key. Ring finger: presses the letter keys O and L, and its number key. Middle finger: presses the letter keys I and K, and its number key. Index finger: presses the letter keys Y, U, H, J, N, and M, and its number keys. Thumb: like the left thumb, the right thumb also presses the space bar. We can train our typing speed on the website typing.com; among its attractive features are games that let us practice improving our typing speed and accuracy.
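The finger assignments above can be captured as a simple lookup table. This is only a sketch using the letter keys listed in the tutorial; the function and dictionary names are my own:

```python
# Finger-to-key map for touch typing, per the assignments listed above.
FINGER_KEYS = {
    "left pinky":   set("QAZ"),
    "left ring":    set("WSX"),
    "left middle":  set("EDC"),
    "left index":   set("RTFVB"),
    "right index":  set("YUHJNM"),
    "right middle": set("IK"),
    "right ring":   set("OL"),
    "right pinky":  set("P"),
    "thumbs":       {" "},
}

def finger_for(key: str) -> str:
    """Return which finger types a given key (letter keys and space only)."""
    key = key.upper() if key != " " else key
    for finger, keys in FINGER_KEYS.items():
        if key in keys:
            return finger
    raise KeyError(f"no finger assigned for {key!r}")

print(finger_for("f"))  # left index  (the F home key)
print(finger_for("j"))  # right index (the J home key)
```

Writing the map out this way also makes the tutorial's point visible: every letter key belongs to exactly one finger, which is why the eyes can stay on the screen.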
We can also use this next website to train our typing speed. It arguably offers even more attractive facilities, because it gives us graded exercises, starting with either the top or the bottom row of the keyboard, so that we become accustomed step by step. Most people are quite satisfied typing with "11 fingers", the nickname for people who use only two fingers to type. Yet typing with ten fingers brings many advantages, including the following. Work is more efficient: typing with ten fingers is naturally faster than with two, because all the keyboard keys are divided among the fingers, and because we type faster, the work gets finished sooner. Work is more effective: typing with ten fingers, we look only at the computer screen, without needing to look at the keyboard, so working time is focused on the typing itself rather than on finding letter or number keys. We do not tire as quickly: the typing load is spread across all the fingers, and because the work is finished sooner, we are not stressed by a backlog of work. People today start businesses for a number of reasons. Do not send long faxes unless you first call the person or business to make sure it is a good time. Nearly everybody who owns a business knows that online and offline promotion strategies are quite different, and anyone who happens to own a small company finds that small-business marketing tactics can add just one more layer of frustration. On almost any given day, a new company is launched. You will soon discover that it is an excellent way to draw new business, as well as to upsell to existing customers.
When your enterprise is ready to go, they can examine your operations and suggest improvements. Companies have a much longer decision-making process because there are many stakeholders involved and far more things to consider, since even the simplest change can have a huge effect on something else. Setting up your own company is a demanding job. Ranking local businesses in Google will turn out to be far more competitive, since everyone will be engaged in basic citation building. A business degree in entrepreneurship will supply the academic foundation needed to become effective at managing your own company. When you're ready to go for a business degree, it is crucial to make the right selection. It is important to be aware that a business management degree isn't just for professionals who want to supervise a team; normally a bachelor's degree in business administration leads to entry-level jobs in an organization. One thing any business owner must do is hire an internet marketer. First-time small business owners will benefit from the support of expert small-business consulting firms. Small business owners have seen that the internet has become the leading channel for advertising, and it has replaced most traditional means of communication and marketing with a bang. One thing a small business proprietor needs to know is that brand promoters are just as essential to a small business as they are to the larger enterprises out there. In the next five years, small business owners will become far more concerned about SEO. Promotion is the single most crucial element in the success of your business. When it comes to small-business advertising, many of us are experiencing information and technology overwhelm. Online small-business advertising is truly essential to the lifeblood of a firm. Small-business advertising website articles can help you grow your enterprise, covering topics such as digital marketing. A business succeeds because it has paying customers or clients in considerable numbers. Small businesses have adopted social media for success by boosting their advertising and marketing tactics online. Opening a small business requires work that you may be unfamiliar with as a new small business owner.
Systems and methods consistent with the present invention provide a data processing system for extending the business data associated with a collaboration tool engine to include spatial reference information for collaborative visualization. The engine has business data type schemas for generating an information store container associated with a respective business data element. A control specification having an information store type schema identifying a corresponding spatial data type schema is provided to the engine. In response to a request, a new information store is generated based on the business type schemas. A roster identifying the information store type schema is provided to the requester. In response to another request, the data processing system generates, via the collaboration tool engine, a spatial reference point container in the new information store for the information store type schema based on the spatial data type schema identified by the information store type schema. This application claims the benefit of the filing date of U.S. Provisional Application No. 60/756,827, entitled "A System and Method for Extending the Business Data Associated with a Network Based User Collaboration Tool to Include Spatial Reference Information for Collaborative Mapping," filed on January 6, 2006, which is incorporated herein by reference to the extent permitted by law. The invention relates generally to network-based collaboration tools, and more particularly to systems and methods for enabling a collaboration tool to generate and map or otherwise visualize geographically referenced information. Standard network-based user collaboration tools, such as Microsoft's SharePoint®, allow users to collaboratively generate, store, and display business data or information (e.g., SharePoint® lists, images, forms, and documents) associated with a defined project or business data model.
However, SharePoint currently lacks the ability to let users associate geographical references with the collaboratively generated business data, and therefore lacks the ability to map or three-dimensionally visualize those references so that users can graphically view newly generated, modified, or removed geographical references associated with the given business data model. Systems and methods consistent with the present invention extend the capabilities of a network-based user collaboration tool to associate spatial reference information with business data stored in information stores associated with the collaboration tool. The term "information store" as used herein is intended to encompass any mechanism provided by a particular collaboration tool for storage of, or direct or indirect access to, business data, irrespective of the collaboration tool's vendor-specific terminology or underlying method(s) of technical realization. In particular, a system and method consistent with the present invention extends Microsoft SharePoint-stored list, image, form, and document data to be geographically aware. The method operates such that street addresses in SharePoint information stores are automatically converted to the corresponding latitude and longitude in near-real time without user interaction. Systems consistent with the present invention also provide a web-based geographical information system ("GIS") viewer application which merges conventional GIS infrastructure layers, viewing controls, and mapping and visualization techniques with such SharePoint-stored data, including dynamic discovery of all GIS-enabled data within a SharePoint installation. Hereafter the term "visualization" will refer to a computerized rendering of two-dimensional maps, aerial photograph overlays, and mosaics, as well as three-dimensional images, photographs, montages, fly-throughs, "bird's eye views," and other known three-dimensional images.
In addition, integrated linkages between the collaboration tool (e.g., SharePoint®) and the viewer web application are provided in systems consistent with the present invention such that users can switch at will between viewing data points in either the collaboration tool's business data display or the web-based GIS viewer web application. In accordance with methods consistent with the present invention, a method in a data processing system for extending the business data associated with a network-based user collaboration tool engine to include spatial reference information for collaborative visualization is provided. The collaboration tool engine has one or more business data type schemas for generating an information store container associated with a respective business data element. The method comprises providing a control specification to the collaboration tool engine. The control specification has one or more information store type schemas. Each information store type schema identifies a corresponding spatial data type schema in association with a geographical data type schema. The method further comprises receiving a request to generate a new information store; generating, via the collaboration tool engine, the new information store based on the one or more business type schemas; providing to the requester a roster identifying the one or more information store type schemas; receiving a second request to add the one or more information store type schemas to the new information store; and, in response to the second request, generating, via the collaboration tool engine, a spatial reference point container in the new information store for each of the one or more information store type schemas based on the spatial data type schema identified by the respective information store type schema. In accordance with systems consistent with the present invention, a data processing system is provided.
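The claimed method flow (provide control specification → first request generates the store → roster offered to the requester → second request adds spatial reference point containers) can be sketched in Python. Every class and function name here is a hypothetical illustration of the flow, not an actual SharePoint or collaboration-tool API; the stub engine only stands in for the collaboration tool engine 126.

```python
# Hypothetical sketch of the claimed two-request flow; names are
# illustrative, not a real collaboration-tool API.
class StubEngine:
    """Stands in for the collaboration tool engine (126 in the figures)."""
    def generate_information_store(self, business_type_schemas):
        return {"business_schemas": business_type_schemas, "containers": []}

    def add_spatial_reference_point_container(self, store, spatial_schema):
        store["containers"].append({"spatial_schema": spatial_schema, "rows": []})

# A control specification with one information store type schema, pairing
# a spatial data type schema with a geographical data type schema (Fig. 2).
CONTROL_SPEC = {
    "fire_department": {
        "spatial_data_type_schema": ["street_address", "postal_code"],
        "geographical_data_type_schema": ["latitude", "longitude"],
    }
}

def handle_requests(engine, control_spec, business_type_schemas, chosen):
    # First request: generate the new information store from business schemas.
    store = engine.generate_information_store(business_type_schemas)
    roster = sorted(control_spec)  # roster of schema names for the requester
    # Second request: add a spatial reference point container per chosen schema.
    for name in chosen:
        if name in roster:
            spatial = control_spec[name]["spatial_data_type_schema"]
            engine.add_spatial_reference_point_container(store, spatial)
    return store, roster
```

The sketch keeps the two-phase shape of the claim: store creation is independent of GIS-enablement, which is attached only when the requester selects a schema from the roster.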
The data processing system comprises a collaboration tool system that includes a secondary storage having a control specification. The control specification has one or more information store type schemas. Each information store type schema identifies a corresponding spatial data type schema in association with a geographical data type schema. The collaboration tool system further includes a memory having a collaboration tool server and a geo-code driver operatively connected to the collaboration tool server and operatively configured to communicate with a geo-coding source. The collaboration tool server is operatively configured to control a collaboration tool engine based on the control specification, receive a request to generate a new information store, generate the new information store based on the one or more business type schemas via the collaboration tool engine, provide to the requester a roster identifying the one or more information store type schemas, receive a second request to add the one or more information store type schemas to the new information store, and, in response to the second request, generate a spatial reference point container in the new information store for each of the one or more information store type schemas based on the spatial data type schema identified by the respective information store type schema. The collaboration tool system further includes a processor to run the collaboration tool server and the geo-code driver. Fig. 4 depicts a flow diagram illustrating a process performed by the collaboration tool server to allow a user to create and update a record in a GIS-enabled information store in accordance with the present invention; and Fig. 5 depicts a flow diagram illustrating a process performed by the collaboration tool server and the web-based GIS viewer to allow a user to selectively view a visualization of one or more GIS-enabled information stores in accordance with the present invention. Fig.
1A is a block diagram of a data processing system 100 having a collaboration tool system 102 enabled in accordance with the present invention to generate, store, and display geographical reference points associated with business data generated or accessed by or accessible through the collaboration tool system. Fig. 1B is an exemplary functional block diagram of the data processing system 100 of Fig. 1A. The data processing system 100 includes one or more client computers 50a-50n that are operatively connected via a network 51 to the collaboration tool system 102. The client computers 50a-50n may be any general-purpose computer system such as an IBM compatible, Apple, or other equivalent computer. The network 51 may be any known private or public communication network, such as a local area network ("LAN"), WAN, Peer-to-Peer, or the Internet, using standard communications protocols. The network 51 may include hardwired as well as wireless branches. As shown in Fig. 1, the collaboration tool system 102 comprises a central processing unit (CPU) 104, an input/output (I/O) unit 106 for communicating across the network 51, a memory 108, a secondary storage device 110, and a display 112. The collaboration tool system 102 may further comprise standard input devices such as a keyboard 114, a mouse 116, or a speech processing means (not illustrated). The various components of the collaboration tool system 102 may be physically located remotely from each other and connected via the network 51. Memory 108 stores a collaboration tool server 120, a geo-code driver 122, and a web-based GIS viewer 124.
As discussed in detail below, the collaboration tool server 120 encapsulates or extends the capabilities of an existing collaboration engine 126 to enable one or more users operating client computers 50 to create, access, and modify an information store 128a-128n so that spatial reference information and corresponding geographical reference points are associated with business data in the information store 128a-128n. In one implementation, the collaboration tool system 102 may be operatively connected to a database 130 for storing information stores 128a-128n generated by the collaboration tool server 120 as further described herein. However, the information stores 128a-128n may be contained in memory 108, secondary storage 110, or in a remote storage (not shown in figures) across the network 51. The collaboration tool system 102 implemented in accordance with the present invention is referenced herein as a GIS-Enabled SharePoint® system. However, the present invention may be employed using other known collaboration tools such as Oracle Collaboration Suite or IBM Lotus Notes/Domino. As shown in Fig. 1, the data processing system 100 also includes a control file or specification 132. The term "specification" is used herein to generically refer to a persistent storage facility, interface, and content format operatively configured in accordance with the existing collaboration tool engine 126. Each control specification 132 serves to define and control the GIS-specific extensions (e.g., geographical reference point 254 in Fig. 2) to the information stores 128a-128n of the collaboration tool engine 126. The control specification 132 has one or more information store definitions 133a-133n, each of which identifies a corresponding business data element type (e.g., a city emergency response entity, such as a fire department) in association with a spatial reference point type (e.g., a street address). For example, as shown in Fig.
2, each information store definition 133a-133n of the control specification 132 may correspond to a respective information store type schema 200a-200n having a structure consistent with data types recognizable by the collaboration tool engine 126, such that the collaboration tool engine may be prompted to instantiate or form an information store 128a-128n in accordance with a request from a user accessing the collaboration tool server 120. In the implementation shown in Fig. 2, the control specification 132 has one or more information store type schemas 200a-200n, each of which defines the schema for a particular type of information store 128a-128n. Each information store type schema 200a-200n includes two sub-schemas, a spatial data type schema 210 and a geographical data type schema 220. The spatial data type schema 210 includes one or more spatial data field schemas 215a-215n which define the attributes (data type, storage size, etc.) of specific spatial data fields, such as street address, postal code, country, or other spatial reference information. The geographical data type schema 220 includes one or more geographical data field schemas 225a-225n which define the attributes (data type, storage size, etc.) of specific geographical data fields, such as latitude, longitude, point type, display icon, or other geographical reference information. The collaboration tool engine 126 instantiates information stores 128a-128n based on a user or application 70 request as described in further detail herein. The content and structure of the business data type schemas 230a-230n are based on the vendor and version of the collaboration engine 126. The business data type schemas 230a-230n may be hierarchical and/or extendable. In one implementation, the structure and format of information store type schemas 200a-200n and business data type schemas 230a-230n are substantially the same.
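The schema hierarchy just described (an information store type schema 200 containing a spatial sub-schema 210 and a geographical sub-schema 220, each holding field schemas with attributes such as data type and storage size) might be modeled as follows. The class names and example field values are hypothetical; only the structure comes from the text.

```python
# Illustrative model of the Fig. 2 schema hierarchy; class names are invented.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FieldSchema:                 # spatial (215a-215n) or geographical (225a-225n) field
    name: str                      # e.g. "street_address", "latitude"
    data_type: str                 # e.g. "text", "float"
    storage_size: int              # attribute such as storage size, per the text

@dataclass
class SpatialDataTypeSchema:       # sub-schema 210
    fields: List[FieldSchema] = field(default_factory=list)

@dataclass
class GeographicalDataTypeSchema:  # sub-schema 220
    fields: List[FieldSchema] = field(default_factory=list)

@dataclass
class InformationStoreTypeSchema:  # schema 200a-200n
    name: str
    spatial: SpatialDataTypeSchema
    geographical: GeographicalDataTypeSchema

# Example instance with invented field values:
schema = InformationStoreTypeSchema(
    name="fire_department",
    spatial=SpatialDataTypeSchema([FieldSchema("street_address", "text", 255)]),
    geographical=GeographicalDataTypeSchema(
        [FieldSchema("latitude", "float", 8), FieldSchema("longitude", "float", 8)]
    ),
)
```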
The collaboration tool engine 126 of the collaboration tool server 120 processes the control specification 132 in conjunction with the business data type schemas 230a-230n provided during the installation of the collaboration tool server 120 or by the end user operating the client computer 50 to construct or extend GIS-enabled information stores 128a-128n in accordance with the control specification 132, as further discussed below. Thus, the collaboration tool server 120 effectively merges the user-selectable business data schema(s) (e.g., 230n in Fig. 2), the spatial schema (e.g., 210), and the geographical schema (e.g., 220) into a single GIS-enabled schema from which a respective information store (e.g., 128n in Fig. 2) may be instantiated by the collaboration tool server 120 via the collaboration tool engine 126. As shown in Fig. 2, when the collaboration tool server 120 causes the information store 128n to be instantiated, the information store 128n includes one or more data rows 240a-240n. Each data row 240a-240n includes a business data portion 250 corresponding to the selected business data type schema(s) 230a-230n, spatial reference point portion(s) 252 corresponding to the selected spatial data type schema(s) 210, and geographical reference point portion(s) 254 corresponding to the selected geographical data type schema(s) 220. In GIS-Enabled SharePoint® 2007, the control specification comprises one or more SharePoint® List Templates and also SharePoint Content Type Definition Features, which collectively may be used to implement one or more information store type schemas 200a-200n as discussed herein. The SharePoint® List Templates and SharePoint Content Type Definition Features are physically implemented as XML data files whose content format is conventional XML and SharePoint's proprietary dialect (i.e., CAML). These files are physically stored on the machine(s) hosting the collaboration engine 126.
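The three-part data row described above (business data portion 250, spatial reference point 252, geographical reference point 254) can be illustrated as a plain dictionary. All field names and values here are invented examples of the kinds of fields the schemas define.

```python
# Illustrative shape of one GIS-enabled data row (240 in Fig. 2);
# every value is invented for demonstration.
row = {
    "business_data": {              # portion 250, per the business data type schema
        "entity": "Fire Station 12",
        "phone": "555-0100",
    },
    "spatial_reference_point": {    # portion 252, per the spatial data type schema
        "street_address": "12 Main St",
        "postal_code": "22203",
    },
    "geographical_reference_point": {  # portion 254, per the geographical schema
        "latitude": 38.88,
        "longitude": -77.10,
        "point_type": "station",
        "display_icon": "fire.png",
    },
}
```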
Implementation details for SharePoint Content Type Definition Features are available from Microsoft at http://msdn2.microsoft.com/en-us/library/ms434313.aspx and http://msdn2.microsoft.com/en-us/library/ms460318.aspx. The collaboration tool engine 126 may, depending on the specific brand, model, version, and configuration of the collaboration tool engine 126, provide access to auxiliary business data elements 134a-134n stored, for example, in a database 136 associated with a particular business data source of an enterprise or company. Without limitation, examples of auxiliary business data elements 134a-134n include enterprise customer relationship management ("CRM") systems, employee data (Human Relations) systems, government agency caseload management systems, hospital medical records systems, and other known business data elements. The database 136 may be accessible by the collaboration tool server 120 across the network 51 or via a local or a respective private network 137a-137n. When operatively configured via auxiliary business data specifications 138a-138n, such data is made accessible to users of the collaboration tool engine 126, and hence users of the collaboration tool system 102, in a manner which largely or completely mimics that of information actually stored within the collaboration tool engine 126. The collaboration tool engine 126 may use any technical means to catalog, retrieve, display, and update such auxiliary business data, which may itself reside in any storage or access technology interoperable with the collaboration tool engine 126. The format and structure of the auxiliary business data specifications 138a-138n are based on the specific vendor and version of the collaboration tool engine 126.
Typically, an auxiliary business data specification 138 will include information elements describing: the technical method of communication (e.g., direct SQL database connection, XML web service, RPC, or another communication technique); the location of the source server (e.g., address, DNS name, server name, database identity, or other source server location information); the security protocol and security identity (e.g., type of authentication, user logon, user password or certificate, encryption requirements and protocols); the specific data elements to retrieve or update (e.g., named XML service methods, SQL tables, views, stored procedures, etc., and their corresponding input parameter requirements and result set schemas, such as field names, data types, data sizes, and data relationships); and any translation between the data types and schemas supported by the source and those supported by the collaboration tool engine 126. Additional similarly structured elements may provide information required for the collaboration engine 126 to perform updates to the auxiliary business data elements 134a-134n. When auxiliary business data elements 134a-134n are available or stored on an auxiliary business data source database 136a-136n and configured for access by the collaboration tool server 120 via a respective auxiliary business data source specification (e.g., 138a in Fig. 2) that includes auxiliary business data type schemas 260a-260n, the collaboration tool server 120 processes the auxiliary business data source specification 138a to effectively cause the collaboration tool engine 126 to generate an information store (e.g., information store 128a) substantially equivalent to an information store (e.g., information store 128n) generated by the collaboration tool engine 126 using the control specification 132, as discussed above.
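The information elements an auxiliary business data specification typically carries might be represented, purely illustratively, as a configuration dictionary. Every server name, method name, and value below is a hypothetical example of the element categories listed in the text, not a real format defined by any collaboration tool.

```python
# Hypothetical auxiliary business data specification (138) as a dictionary;
# keys mirror the element categories named in the text, values are invented.
AUX_SPEC = {
    "communication": "xml_web_service",    # or direct SQL connection, RPC, ...
    "source_server": {"dns_name": "crm.example.com", "database": "CRM"},
    "security": {"authentication": "certificate", "encryption": "TLS"},
    "data_elements": {
        "method": "GetCustomers",          # named service method to retrieve
        "result_schema": [("name", "text", 100), ("address", "text", 255)],
    },
    "type_translation": {"text": "string"},  # source type -> engine type
}

def required_sections_present(spec: dict) -> bool:
    """Check that the core element categories from the text are all present."""
    required = {"communication", "source_server", "security", "data_elements"}
    return required <= spec.keys()
```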
Thus, in accordance with the present invention, the collaboration tool server 120 is operatively configured to extend the capabilities of the collaboration tool engine 126 so that the collaboration tool engine 126 is equally capable of operating on auxiliary business data 134a-134n. Accordingly, an information store 128a-128n, as used throughout, may include information stores generated based on auxiliary business data and information stores generated based on business data associated with the information store definitions 133a-133n or information store type schemas 200a-200n in the control specification 132. A user, via a browser 52 on a client computer 50, may access the collaboration tool server 120 to create, edit, or delete information stores 128a-128n (e.g., SharePoint® lists, libraries, etc.) hosted on the collaboration tool server 120, and to attach or detach GIS-enablement to them as described in further detail below, for example, in reference to Fig. 3. After processing the control specification 132, the collaboration tool server 120 effectively causes the collaboration tool engine 126 to be operatively configured to expose a new or amended GIS-enabled information store 128a-128n in accordance with a respective information store definition 133a-133n in the control specification 132. As shown in Fig. 2, each information store 128a-128n functions as a container for data rows 240a-240n, each of which has capacity for storing arbitrary business data 250, a corresponding spatial reference point 252, and a geographical reference point 254. Each information store 128a-128n generated by or accessible by the collaboration tool server 120 via the collaboration tool engine 126 immediately becomes accessible to the geo-coding driver 122 and the web-based GIS viewer 124.
An external application 70, of whatever description(s) or purpose(s), running on an external application server 72 may access the collaboration tool server 120 to create, edit, or delete information stores 128a-128n (e.g., SharePoint® lists, libraries, etc.) hosted on the collaboration tool server 120, and to attach or detach GIS-enablement to them as described in further detail below, for example, in reference to Fig. 3. After processing the control specification 132, the collaboration tool server 120 effectively causes the collaboration tool engine 126 to be operatively configured to expose a new or amended GIS-enabled information store 128a-128n in accordance with a respective information store definition 133a-133n in the control specification 132. As shown in Fig. 2, each information store 128a-128n functions as a container for data rows 240a-240n, each of which has capacity for storing arbitrary business data 250, a corresponding spatial reference point 252, and a geographical reference point 254. Each information store 128a-128n generated by or accessible by the collaboration tool server 120 via the collaboration tool engine 126 immediately becomes accessible to the geo-coding driver 122 and the web-based GIS viewer 124. One of ordinary skill in the art will appreciate that the access technique implemented by the collaboration tool server 120 and the external application 70 may vary depending on the particular collaboration tool engine 126 employed by the collaboration tool server 120. For example, when the collaboration tool engine 126 employed by the collaboration tool server 120 is SharePoint® and extended or encapsulated in accordance with the present invention, an external application 70 may use an XML web service and/or a remote procedure call (RPC) technique to communicate with the collaboration tool server 120.
A user, via a browser 52 (such as Internet Explorer® or Netscape®) on a client computer system 50, may access the collaboration tool server 120 to create and/or edit a GIS-enabled data row 240 associated with a GIS-enabled information store 128a-128n (e.g., save, edit, and delete SharePoint®-stored list items, forms, and documents) hosted on the collaboration tool server 120, as further described herein, for example, in reference to Fig. 4. An external application 70, of whatever description(s) or purpose(s), running on an external application server 72 may access the collaboration tool server 120 via the network 51 to create and/or edit GIS-enabled data containers or rows 240 associated with a GIS-enabled information store 128a-128n (e.g., save, edit, and delete SharePoint®-stored list items, forms, and documents) hosted on the collaboration tool server 120. One of ordinary skill in the art will appreciate that the access technique implemented by the collaboration tool server 120 and the external application 70 may vary depending on the particular collaboration tool engine 126 employed by the collaboration tool server 120. For example, when the collaboration tool engine 126 employed by the collaboration tool server 120 is SharePoint® and extended or encapsulated in accordance with the present invention, an external application 70 may use an XML web service and/or a remote procedure call (RPC) technique to communicate with the collaboration tool server 120. In addition, an external application 70 may mine external data sources for items of interest or gather data from other line-of-business applications (not shown in figures) and use the collaboration tool system 102 as a publishing or distribution channel or as an archive by generating GIS-enabled information stores 128a-128n via the collaboration tool server 120. As shown in Figs.
1A and 1B, the collaboration tool server 120 also includes one or more event trigger/response components or modules 140 (e.g., GIS-Enabled SharePoint® Event Triggers or Features) each of which monitors a predetermined location or collection of information store 128a-128n locations, such as a SharePoint® data store location, where respective GIS-enabled information stores 128a-128n may have been created and stored by the collaboration tool server 120. Each of the event trigger/response modules 140 is operatively configured to react to or generate an event trigger upon the creation or deletion of GIS-enabled information stores 128a-128n, the creation or deletion of GIS-enabled records/documents within each of the information stores 128a-128n (e.g., business data 250 information), and the update of GIS-significant fields (e.g., spatial reference point 252 or geographical reference point 254) within or associated with such records/documents. The event trigger mechanism of the event-trigger modules may be implemented across any technology base associated with the collaboration tool engine 126 encapsulated or implemented in the collaboration tool server 120. In one implementation in which the collaboration tool engine 126 is Microsoft® SharePoint®, the collaboration tool server 120 may utilize several Microsoft® technologies to implement the event trigger mechanism of the event trigger/response modules 140, including but not limited to SQL Server, MSMQ, .NET XML web services, and the SharePoint® object model and SharePoint® event facility. The conversion of a street address, street intersection, or similar culturally-described location (i.e. "a spatial reference point") to a latitude/longitude pair (i.e. a "geographical reference point" or "X/Y pair") is conventionally termed "geo-coding".
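The trigger conditions described above can be sketched as a single predicate: store creation/deletion and row creation/deletion always trigger, while a row update triggers only when it touches a GIS-significant field. This is a hedged sketch with hypothetical event-kind and field names, not the SharePoint® event facility itself.

```python
# Fields treated as GIS-significant in this sketch (standing in for the
# spatial reference point 252 and geographical reference point 254).
GIS_SIGNIFICANT = {"spatial_252", "geo_254"}

def should_trigger(event_kind, changed_fields=()):
    """Return True when an event trigger/response module (140) should fire."""
    if event_kind in {"store_created", "store_deleted", "row_created", "row_deleted"}:
        return True                               # structural events always trigger
    if event_kind == "row_updated":
        # Updates trigger only when a GIS-significant field changed.
        return bool(GIS_SIGNIFICANT & set(changed_fields))
    return False

spatial_change = should_trigger("row_updated", ["spatial_252"])   # GIS-significant update
plain_change = should_trigger("row_updated", ["phone"])           # ordinary business update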
Geo-coding can be performed by any number of publicly accessible geo-coding vendors such as ESRI, Google Maps®, or Microsoft® Virtual Earth®, as well as by privately-owned & locally installed specialist software from a similar array of vendors. Collectively these facilities are referred to herein as "geo-coding sources" or "geo-coding source systems". Particular geo-coding source systems may also return additional geographic information such as altitude; perform additional geographic services such as address correction; provide auxiliary cultural information; and provide other geography-related information. In each case, the data returned is applicable to the particular X/Y coordinate. The geo-coding driver 122 includes a source-agnostic geo-coding interface 144 and a plurality of source-specific geo-coding connectors 146. The geo-coding interface 144 is operatively configured to receive an event triggered by a respective event trigger/response module 140 of the collaboration tool server 120 and transfer the event to a corresponding one of the geo-coding connectors 146. Each geo-coding connector 146 is operatively configured to communicate with a respective geo-coding source system 150 using one of a variety of mechanisms, such as an XML web service, an RPC, or a SQL record or file system drop. Regardless of the mechanism used, in response to receiving an event from a respective event trigger/response module 140 via the geo-coding interface 144, each geo-coding connector 146 provides to the respective geo-coding source system 150 an address derived from the GIS-enabled information store 128a or 128n having the newly created or changed GIS-significant field (e.g., new or modified spatial reference point 252).
In return, the respective geo-coding connector 146 is operatively configured to receive from the respective geo-coding source system 150 a geographic reference point (e.g., a latitude and a longitude, plus other source-specific optional geographic attributes) corresponding to the spatial reference point associated with the GIS-enabled information store record or document having the newly created or changed GIS-significant field. The respective geo-coding connector 146 is adapted to post or store, via the source-agnostic geo-coding interface 144, the geographic reference point 254 data in association with the GIS-enabled information store 128a or 128n record or document which caused the event to be triggered. In one implementation, for example when the collaboration tool server 120 extends or encapsulates SharePoint® 2003 as the collaboration tool engine 126 consistent with the present invention, the event trigger/response module 140 is operatively configured to monitor for the collaboration tool engine 126 to update (e.g., read from or to write to) or to delete the spatial reference point 252 or the geographical reference point 254 associated with a respective information store 128a-128n. When the information store 128a-128n is updated by the collaboration tool engine 126, the event trigger/response module 140 communicates to a geo-coding interface 144 of the geo-coding driver 122 as further described below so that the corresponding geographical reference point 254 associated with a respective information store 128a-128n may be updated to reflect any change to the spatial reference point 252 of the information store 128a-128n. In another implementation, for example when the collaboration tool server 120 extends or encapsulates SharePoint® 2007 as the collaboration tool engine 126 consistent with the present invention, in addition to the event trigger/response module 140, the collaboration tool engine 126 includes an event subscription interface 142.
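The division of labor described above — a source-agnostic geo-coding interface routing each event to a source-specific connector, which returns a geographical reference point — can be sketched as an abstract interface plus a stub connector. The stub's canned lookup table stands in for a real vendor source system (ESRI, Google Maps®, Virtual Earth®, etc.); all names here are hypothetical.

```python
from abc import ABC, abstractmethod

class GeoCodingConnector(ABC):
    """Source-specific connector (146) to one geo-coding source system (150)."""
    @abstractmethod
    def geocode(self, spatial_ref: str):
        """Return ((lat, lon) or None, status) for a culturally-described location."""

class StubConnector(GeoCodingConnector):
    # Canned answers standing in for a real geo-coding source system.
    TABLE = {"12 Main St": (41.0, -87.5)}
    def geocode(self, spatial_ref):
        point = self.TABLE.get(spatial_ref)
        return (point, "OK") if point else (None, "NOT_FOUND")

class GeoCodingInterface:
    """Source-agnostic interface (144): routes requests to the right connector."""
    def __init__(self):
        self.connectors = {}
    def register(self, source_name, connector):
        self.connectors[source_name] = connector
    def geocode(self, source_name, spatial_ref):
        return self.connectors[source_name].geocode(spatial_ref)

iface = GeoCodingInterface()
iface.register("stub", StubConnector())
point, status = iface.geocode("stub", "12 Main St")
```

Keeping the interface source-agnostic is what lets connectors for different vendors be swapped in without touching the event trigger/response path.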
In this implementation, the control specification 132 may include an event subscription 270 associated with an information store type schema (e.g., 200n) as shown in Fig. 2. The event subscription 270 causes the collaboration tool engine 126 to assign an event within the event subscription interface 142 to the event trigger/response module 140 so that when an information store (e.g., 128n) corresponding to the information store type schema (e.g., 200n) and the event subscription 270 is updated by the collaboration tool engine 126, the event subscription interface 142 communicates the respective update event to the event trigger/response module 140. In response, the event trigger/response module 140 communicates the update event to the geo-coding interface 144 as further described below so that the corresponding geographical reference point 254 associated with a respective information store 128a-128n may be updated to reflect any change to the spatial reference point 252 of the information store 128a-128n. In accordance with the present invention, a user, via the browser 52 on the client computer system 50, may also access the web-based GIS viewer 124 to view and/or manipulate a visualization of GIS-enabled data (e.g., spatial reference point 252 or geographical reference point 254) associated with GIS-enabled information stores 128a-128n created and managed by the collaboration tool server 120. The web-based GIS viewer 124 includes a display/control application, which may be an extension to a conventional web-based GIS viewer/editor application or toolkit, such as a web-based GIS viewer/editor commercially available from ESRI, Google®, or Microsoft®.
In addition to offering the pan, zoom, search, select, and view options of a typical GIS viewer/editor, the web-based GIS viewer 124 has at least the following two key user-selectable extensions: a secure dynamic discovery ("SDD") sub-process, accessed via an SDD interface 194 to the SDD component 192 of the collaboration tool server 120, and a secure point retrieval ("SPR") sub-process. Through SDD, different users accessing a URL associated with one or more GIS-enabled information stores hosted on the collaboration tool server 120 may see different data layers in a visualization generated by the web-based GIS viewer 124 depending on the personal security authority or profile (not shown in figures) associated with the user. SDD communication may be accomplished via an XML web service. In one implementation, the SDD interface 194 of the web-based GIS viewer is further augmented with a manual configuration of a persistent SharePoint®-based layer to provide specific repeatable graphic symbology and/or detailed data point categorization within the layer. Absent manual configuration, the SDD assigns unique symbology to each layer. The SPR extension of the collaboration tool server 120 is operatively configured to allow the web-based GIS viewer 124 to retrieve points (e.g., geographic reference points 254 and business data attributes stored in the associated business data rows 250 of respective information stores 128a-128n) stored within the collaboration tool server's 120 installation site or within the data processing system 100. The web-based GIS viewer 124 is operatively configured to then display the retrieved points and associated business attributes in situ on the visualization web page provided to the browser 52 of the client computer 50 operated by the user. The GIS viewer 124 is also operatively configured to provide user interface (UI) linkages back into the associated GIS-enabled information store record hosted by the collaboration tool server 120. The UI linkages connect the user's browser 52 directly to the source data.
In one implementation, the collaboration tool server 120 incorporates an extract-and-save capability. This capability permits users to identify points on a visualization generated from local data layers 164 & GIS-enabled information stores hosted on the collaboration tool server 120, and to copy such identified points into alternative existing or newly created GIS-enabled information stores hosted on the collaboration tool server 120. Turning to Fig. 3, a flow diagram is shown illustrating a process 300 performed by the collaboration tool server 120 to allow a user to generate a GIS-enabled information store 128a-128n having the capacity to store data rows 240a-240n coupling business data 250 with spatial reference points 252 and corresponding geographical reference points 254 consistent with the present invention. Initially, the collaboration tool server receives a request to create an information store from a requestor (e.g., a user operating client computer 50 or a networked external application 70) (step 302). In one implementation, this request takes the form of the browser 52 posting a web page (not shown in figures) to the collaboration tool server 120 providing the location and name for the desired information store 128a or 128n as well as an initial information store schema from among the business data type schemas 230a-230n and the information store type schemas 200a-200n, which include GIS-enabled information store type schemas characterized by or having a spatial data type schema 210 and a corresponding geographical data type schema 220.
In an alternate implementation, the request takes the form of an XML web service call (not shown in the figures) from an external application 70 to the collaboration tool server 120 providing the location and name for the desired information store 128a or 128n as well as the desired initial information store schema from among the business data type schemas 230a-230n and the information store type schemas 200a-200n, which include GIS-enabled information store type schemas characterized by or having a spatial data type schema 210 and a corresponding geographical data type schema 220. One of ordinary skill in the art will appreciate that additional or differing means of accomplishing the logically equivalent outcome may exist using various collaboration engines 126 and the collaboration tool server 120. Next, the collaboration tool server 120 creates the requested information store and provides a roster 190 of available schemas, including GIS-enabled information store type schemas, to the requestor (step 306). In one implementation, the collaboration tool server 120 responds to the web page post of step 302 by replying with a web page (not shown in figures) indicating success in the creation of the information store 128a and showing the initial information store schema associated with the information store 128a. In addition, the web page may include a display of additional available business data type schemas 230a-230n and information store type schemas 200a-200n, including GIS-enabled information store type schemas having a spatial data type schema 210 and a corresponding geographical data type schema 220 derived from the control specification 132. Upon receipt of the display of such schemas, the user may elect to cause his browser 52 to request that the collaboration tool server 120 add a plurality of such schemas, including GIS-enabled information store type schemas, to the schema of the instant information store (e.g. 128a).
In an alternate implementation in which the requestor is an external application 70, the roster 190 may be provided in the form of an XML web service response back to the external application 70 from the collaboration tool server 120 indicating success in the creation of the information store and showing the initial information store schema associated with the information store 128a. Additional XML web services and web service methods may be used by the collaboration tool server 120 to make available to the external application 70 (on request) an enumeration of the available business data type schemas 230a-230n and the information store type schemas 200a-200n, including GIS-enabled information store type schemas derived from the control specification 132. Upon receipt of the enumeration of such schemas, the external application 70 may elect to request that the collaboration tool server 120 add a plurality of such schemas, including GIS-enabled schemas, to the schema associated with the instant information store (e.g. 128a). One of ordinary skill in the art will appreciate that additional or differing means of accomplishing the logically equivalent outcome may exist using various collaboration engines 126 and the collaboration tool server 120. Next, the collaboration tool server 120 receives a request from the requestor to add one or more of the GIS-enabled information store schemas to the information store (step 308). In one implementation, this request takes the form of the browser 52 posting a web page (not shown in figures) to the collaboration tool server 120 providing the identity of the information store 128a as well as the identity(ies) of a plurality of desired additional information store schema(s) from among the business data type schemas 230a-230n and the information store type schemas 200a-200n, including GIS-enabled information store type schemas having a respective spatial data type schema 210 and a corresponding geographical data type schema 220.
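The roster-and-selection exchange described above can be sketched as follows: the server exposes an enumeration of available schemas, and the requestor picks schemas (including GIS-enabled ones) whose fields are then merged into the information store's schema. The schema names and field names are hypothetical placeholders, not actual control-specification content.

```python
# Available schemas, standing in for business data type schemas (230a-230n)
# and GIS-enabled information store type schemas (with spatial 210 and
# geographical 220 parts). All names are illustrative.
AVAILABLE_SCHEMAS = {
    "ContactInfo": ["name", "phone"],                   # plain business schema
    "GISLocation": ["spatial_ref_252", "geo_ref_254"],  # GIS-enabled schema
}

def add_schemas(store_fields, requested):
    """Merge the fields of each requested schema into the store's field list."""
    for schema in requested:
        for f in AVAILABLE_SCHEMAS[schema]:
            if f not in store_fields:                   # avoid duplicate fields
                store_fields.append(f)
    return store_fields

# Requestor picks both a business schema and the GIS-enabled schema.
fields = add_schemas([], ["ContactInfo", "GISLocation"])
```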
In an alternate implementation the request takes the form of an XML web service call from an external application 70 to the collaboration tool server 120 providing the identity of the information store 128a as well as the identity(ies) of a plurality of desired additional information store schema(s) from among the business data type schemas 230a-230n and the information store type schemas 200a-200n, including GIS-enabled information store type schemas having a respective spatial data type schema 210 and a corresponding geographical data type schema 220. One ordinarily skilled in the art can readily appreciate that other communication techniques for receiving a request from a user or an external application other than a web page or an XML web service call may be employed without departing from the scope of the present invention. The collaboration tool server 120 then generates a container (e.g., information store field 280a-280n) in the information store based on each requested GIS-enabled information store schema (step 310) before ending processing. In one implementation, the collaboration tool server 120 causes the collaboration engine 126 to locate the schema (e.g., business data type schemas 230a-230n and/or information store type schemas 200a-200n) associated with the referenced information store (e.g. 128a) from its information store database 130, and for each requested GIS-enabled information store schema, access the requested schema (e.g., 200n) from the control specification 132 and generate corresponding entries in its information store database 130 to implement the information store fields 170a-170n based on the respective requested schema (e.g., including spatial data field schemas 215a-215n and geographical data field schemas 225a-225n) by recording its respective attributes (e.g. name, data type, storage capacity, and display format) and other information store attributes as may exist in the respective information store type schema 200a-200n (e.g.
display options, security restrictions, etc.). Finally, entries are made in its information store database 130 to add additional data container(s) or row(s) 240a-240n to the newly created information store (e.g. 128a) based on the requested GIS-enabled information store schema. Note that the process of steps 306-310 is, or can be, iteratively performed by the collaboration tool server 120. Fig. 4 depicts a flow diagram illustrating a process 400 performed by the collaboration tool server 120 to allow a user to create or update a data container or row (e.g. 240a) in a GIS-enabled information store 128a-128n in accordance with the present invention. Initially, the collaboration tool server receives a request from a requestor (e.g., a user operating client computer 50 or a networked external application 70) to create or update a data row (e.g. 240a) (Step 402). In one implementation, this request takes the form of the browser 52 posting a web page to the collaboration tool server 120 providing the location and identity for the relevant information store (e.g. 128a) and the identities and values for business information store fields (e.g., 280a-280j) and spatial reference point information store fields (e.g., 280k-280q) associated with the schema (e.g., 230a-230n and 200a-200n) associated with the instant information store (e.g. 128a). For updates to an existing data container or row (e.g., 240a), the identity of such data row is also provided. In an alternate implementation the request takes the form of an XML web service call from an external application 70 to the collaboration tool server 120 providing the equivalent information. One of ordinary skill in the art will appreciate that additional or differing means of accomplishing the logically equivalent outcome may exist based on the various collaboration engines 126 used by the collaboration tool server 120.
Next, the collaboration tool server 120 locates the requested information store and creates or updates the data container or row (e.g. 240a) as requested (Step 404). In one implementation, the collaboration tool server 120 requests the collaboration engine 126 to locate the schema of the referenced information store (e.g. 128a) from its information store database 130, then, in the case of an update, locate the respective data container or row (e.g., 240a) from the information store (e.g., 128a). In the case of an update, the information store type schema (e.g., 200n) associated with or used by the collaboration tool server 120 to instantiate the respective data container or row is referenced to match the data element names & values provided in the update request with the underlying physical storage of the information store database 130, and to direct the information store database 130 to store such updated values in the matched physical locations for the data row (e.g., 240n). To create a new data container or row (e.g., 240n) the collaboration tool server 120 performs a similar process as an update using the collaboration tool engine 126; the schema 230a-230n and/or 200a-200n associated with the respective information store 128a-128n is referenced to match the data elements and values provided in the create request with the underlying physical storage of the information store database 130, followed by directing the information store database 130 to create a new data container or row (e.g., 240n) with the new values in the matched physical information store fields 170a-170n of the new data container or row. Next, the collaboration tool server 120 determines whether the collaboration tool engine 126 has an event subscription interface 142; if not, the collaboration tool server 120 performs steps 410-414, and if so, performs steps 420-424 as discussed below. After performing either steps 410-414 or steps 420-424, the collaboration tool server 120 continues processing at step 430.
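The schema-driven matching in Step 404 — mapping data element names in the request onto the physical storage slots of the information store database — can be sketched as a simple lookup-and-write. The element names and slot names below are hypothetical; a real collaboration engine would resolve them through its own schema catalog.

```python
# Hypothetical schema fragment: request element name -> physical storage slot
# in the information store database (130).
SCHEMA = {"address": "col_3", "census": "col_7"}

def apply_update(physical_row, request_values):
    """Write each provided value into the physical slot the schema maps it to."""
    for name, value in request_values.items():
        slot = SCHEMA[name]                 # match via the schema, as in Step 404
        physical_row[slot] = value          # store in the matched physical location
    return physical_row

# Creating a new row and an update follow the same matching logic.
row = apply_update({}, {"address": "12 Main St", "census": "88"})
```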
Next, the event response module 141 determines whether the event occurrence or the notice of change corresponds to a GIS-enabled data container or row update (e.g., spatial reference point 252 update), and if so, communicates the event occurrence or notice of change to the geo-coding interface 144 (step 414). After determining that the collaboration tool engine 126 does have an event subscription interface 142, the event subscription interface 142 receives notice of the new/updated data container or row (e.g., 240a) (Step 420) as previously described in reference to event subscription 270. In one implementation, the collaboration engine 126's event subscription interface 142 stores information about event subscriptions 270 in secondary storage 110 and/or memory 108. One ordinarily skilled in the art can readily appreciate that the specificity of event subscriptions may vary between differing collaboration engines 126. Next, the event subscription interface 142 communicates the event occurrence or the notice of change to the event response module 141 (step 422). In one implementation, when operatively configured, the event subscription 270 is retrieved by the event subscription interface 142 in response to any data row creation/update event affecting an information store 128a-128n associated with the event subscription 270. The event subscription interface 142 then communicates the event occurrence or notification of change to the event response module 141 by instantiating a copy of such module and passing to it a notification containing the nature of the event and the identity of the information store (e.g. 128a) and data row (e.g. 240a) associated with the event. One ordinarily skilled in the art can readily appreciate additional or differing functionally equivalent possibilities which may exist when the present invention is applied to differing collaboration engines 126.
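The filtering step performed by the event response module — forwarding an event to the geo-coding interface only for a GIS-enabled store whose spatial reference changed, together with the store and row identities — can be sketched as below. The class name, identifiers, and callback shape are hypothetical.

```python
class EventResponseModule:
    """Sketch of the event response module (141): filter and forward events."""
    def __init__(self, gis_enabled_stores, geocoding_interface):
        self.gis_enabled = gis_enabled_stores       # stores known to be GIS-enabled
        self.geocode = geocoding_interface          # stand-in for the geo-coding interface 144

    def handle(self, store_id, row_id, changed_fields):
        # Forward only GIS-enabled stores with a changed spatial reference (252).
        if store_id in self.gis_enabled and "spatial_252" in changed_fields:
            self.geocode(store_id, row_id)          # pass along store and row identities
            return True
        return False

calls = []
module = EventResponseModule({"128a"}, lambda s, r: calls.append((s, r)))
module.handle("128a", "240a", {"spatial_252"})      # dispatched to geo-coding
module.handle("128b", "240a", {"spatial_252"})      # store not GIS-enabled: ignored
```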
The geo-coding interface 144 then identifies the corresponding source-specific geo-coding connector 146 from the operable configuration and context associated with the respective information store (e.g., 128a), and passes the spatial reference point 252 data to the identified geo-coding connector (Step 432). In one implementation, the geo-coding interface 144 references the schema of the information store 128a to determine the corresponding spatial data type schema 210 from the control specification 132 and thereby determine the appropriate geo-coding connector 146. In another implementation, the configuration information may be stored internal to the collaboration tool server 120 to identify a geo-coding connector 146 for each instantiated information store 128a-128n. Next, the geo-coding connector 146 contacts the geo-coding source system 150, provides it the spatial reference data, and in turn receives geographical reference data, a geo-coding status/results message, plus optional additional geographical attributes (Step 434). The geographical reference data, the geo-coding status/results message, plus such additional geographical attributes as are made available are collectively termed herein "geographical results". In one implementation, the geo-coding connector 146 contacts Microsoft® MapPoint Web Services via XML web service and passes the spatial reference point information to the XML web service according to its well-known documented syntax and semantics. In response, the Microsoft® MapPoint Web Services XML web service provides geographical results that include a geographical reference point (i.e. latitude & longitude) and an indication of success or failure and, in the case of failure, some diagnostic information as to the cause of the failure. The geo-coding connector 146 then passes the received geographical results back to the geo-coding interface 144 (Step 436).
In one implementation, the geo-coding connector 146 composes such results into its return object (a term of art commonly known to skilled practitioners) and transfers the return object and programmatic control back to the geo-coding interface 144. Next, the geo-coding interface 144 posts the geographical results back to the collaboration tool server 120 along with a reference to the associated information store (e.g., 128a) and identification of the data container or row (e.g., 240n) (Step 438). In one implementation, the geo-coding interface 144 uses the identity information obtained in step 414 or 424 to identify the instant data row (e.g. 240a). The collaboration tool server 120 then locates the identified information store 128a and data row and updates the data row as requested (Step 440) before ending processing of process 400. In one implementation, the collaboration tool server 120 requests the collaboration tool engine to update the geographical reference point 254 data of the identified data container or row of the identified information store (e.g., 128a). Turning to Fig. 5, a flow diagram is shown illustrating a process 500 performed by the collaboration tool server 120 working cooperatively with the web-based GIS viewer 124 to allow a user to selectively view a visualization of one or more GIS-enabled information stores 128a-128n in accordance with the present invention. Because this is a cooperative effort between two largely independent components of the collaboration tool system 102, there is a respective start point, labeled Start 1 & Start 2 on Fig. 5, associated with each respective component (the collaboration tool server 120 and the web-based GIS viewer 124) for performing the process 500.
Although the process 500 is described herein as beginning with Start 1 based on an end-user access of the collaboration tool server 120, an end-user operating on a client computer 50 could alternatively first access the web-based GIS viewer 124 to cause the process 500 to begin at Start 2. To provide clarity to this aspect of the present invention associated with the process 500, a representative example involving an emergency management application dealing with evacuating a nursing home in the face of rising floodwaters is presented in context with the description of the process 500. Commencing from point Start 1, the collaboration tool server 120 receives a request to display GIS-enabled business data (e.g. 240a) stored in an information store (e.g. 128a) and provides a response (Step 502). As a representative example, consider an information store containing data rows corresponding to nursing homes and containing business data consisting of management contact information and resident census information as well as spatial reference information (i.e. street address) and geographical reference information (i.e. latitude/longitude). In one implementation, the request is in the form of a web page posted from the user's browser 52 which instructs the collaboration tool server 120 to generate a tabular report of data container(s) or row(s) (e.g. 240a-240n) from an information store (e.g. 128a). In response, the collaboration tool server 120 retrieves the requested data from its information store database 130 and generates a web page containing the data formatted in a tabular row and column fashion. Because the information store (e.g.
128a) is GIS-enabled with spatial reference point 252 data and corresponding geographical reference point 254 data as described herein in addition to business data 250, the information store includes a geographical reference URL which corresponds to a command to the web-based GIS viewer 124 to display a visualization of a certain configured type and containing a geographical reference point corresponding to this business record 250. The tabular data on the web page includes the geographical reference URL as a selectable (i.e., navigable) link. Having assembled the web page as described above, the collaboration tool server 120 transmits the page to the user's browser 52 for display. Next, the user views the GIS-enabled business data containers or rows 240a-240n from the collaboration tool server 120 using a traditional textual record-oriented reporting or editing user interface (not shown in figures) (Step 504). As a representative example, assume the user (an emergency management dispatcher) has requested the information associated with GIS-enabled business data containers or rows 240a-240n because of a call from one nursing home reporting rising floodwaters threatening their facility & residents. The user observes via the browser 52 the data 240a-240n as provided by the collaboration tool server 120 sorted by name & locates the affected home. Next, the user observes a business data row of interest (e.g., 240n) and requests the collaboration tool server 120 provide a visualization of this business data row of interest in association with related geographical reference points (e.g., point 254) (Step 506). As a representative example, our user recognizes the nursing home by name and wants to view it, and other nearby nursing homes, on a map (not shown in figures) created by the web-based GIS viewer 124 including topographical data and floodwater contour predictions. The user's browser 52 then sends the corresponding visualization request to the web-based GIS viewer 124 (step 508), and the web-based GIS viewer 124 receives the request (Step 522).
As a representative example, the web-based GIS viewer 124 receives the request generated by our user in step 508. In one implementation, the web-based GIS viewer 124 examines the request to determine the visualization type desired and the geographical reference point(s) 254 desired. If such parameters are missing from the request, the web-based GIS viewer 124 provides a default visualization (for example, a 2D map) lacking any visible geographical reference points and centered on an operatively configurable geographical location (typically the center of the installation's region of interest or responsibility). Next, the web-based GIS viewer 124 contacts the collaboration tool server 120 to perform SDD based on the end-user's credentials (e.g., corresponding to the end-user's profile stored on the data processing system in accordance with known authentication techniques). The web-based GIS viewer retrieves a roster of available information stores or, equivalently, GIS layers (Step 524). As a representative example, the web-based GIS viewer contacts the collaboration tool server 120's SDD component 192, which reports that this user has available GIS-enabled information stores 128 including schools, school bus storage yards, and current up-to-the-minute locations of city transit buses. In one implementation, the communication from the web-based GIS viewer 124 to the collaboration tool server 120 is accomplished via XML web service, passing along the end-user access credentials. The SDD component 192 web service queries the collaboration engine 126 via its well-known documented interfaces to determine the topology of the installation as of that moment and the GIS-enabled information stores available at each point in the topology to that user in accordance with the user's credentials. The collaboration engine 126 consults the secondary storage 110 and information store database(s) 130 to determine the appropriate response(s) to the SDD 192 web service queries.
The collaboration tool server 120's SDD component 192 web service returns this information to the web-based GIS viewer's SDD interface 194 as an XML document. One of ordinary skill in the art will appreciate that other interface mechanisms are usable as well. Next, the web-based GIS viewer 124 consults configured connections to secure data system(s) 160 to determine available GIS layers (Step 526). As a representative example, the web-based GIS viewer 124 contacts a secure data system 160 belonging to the emergency management agency which reports that topographical contours are available, as are real-time flood water depth maps generated from data provided by radio-based sensors located on traffic lights throughout the area. In one implementation, the web-based GIS viewer 124 communicates with ESRI ArcIMS secure data system servers via network 51 using the ArcIMS proprietary communications protocol to authenticate and determine available layers and map overlays. One of ordinary skill in the art can readily appreciate that additional or differing functionally equivalent communication techniques or protocols may be employed based on the particular secure data system 160. Next, the web-based GIS viewer 124 sends the requested visualization and control user interface with a complete layer roster back to the requesting browser 52 (Step 528). As a representative example, the web-based GIS viewer 124 provides the requested 2D map with topographical data centered on and displaying a reference point icon representing the nursing home. The response to the browser also includes a control user interface containing the roster of available collaboration tool server 120-based layers (schools, school bus yards and city buses) and the secure data system 160-based layers (the topographical contour information and the flood water depth information). In one implementation, the layers are depicted as a tree structure of layer names, each with check boxes to enable end-user selection.
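The credential-dependent layer roster produced by SDD can be sketched as an access-control lookup: the same request yields different rosters for users with different authority. The layer names and roles below are hypothetical and chosen to echo the representative example, not actual SDD data.

```python
# Hypothetical access-control map: which roles may see each layer.
LAYER_ACL = {
    "nursing_homes": {"dispatcher", "admin"},
    "city_buses": {"dispatcher", "admin"},
    "school_yards": {"admin"},
}

def sdd_roster(role):
    """Return the sorted roster of layers visible to a user with this role."""
    return sorted(layer for layer, roles in LAYER_ACL.items() if role in roles)

dispatcher_layers = sdd_roster("dispatcher")    # excludes admin-only layers
admin_layers = sdd_roster("admin")              # sees everything
```

This mirrors the behavior described above: two users requesting the same visualization can receive different layer rosters depending on their personal security authority or profile.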
The visualization tool portion of the web-based GIS viewer 124 is capable of several types of visualization, including without limitation, 2D maps, 3D maps, and aerial photographs selected by dropdown controls. Additional capabilities of the control user interface include, without limitation, selecting specific points by click, dragging a lasso around one or more points to select multiple points, zoom, pan, and rotate. In one implementation, the actual visualization is generated ("rendered" being the term of art) by Microsoft® Virtual Earth®, an XML web-service, AJAX-based technology. One of ordinary skill in the art will readily appreciate that additional or differing functionally similar possibilities may exist when the present invention is applied to differing visualization rendering systems of similar purpose from other vendors. Upon receipt, the requesting browser 50 displays the visualization and control user interface as provided by the web-based GIS viewer for the user. Next, the end-user identifies desired layer(s), base map(s), and visualization type to the web-based GIS viewer 124. The end-user's browser 50 sends the identified information in a request back to the web-based GIS viewer (Step 530). As a representative example, the user may decide to see flood water levels, current city bus locations, and nursing home locations overlaid together on the map. In one implementation, the user selects the desired map via a dropdown menu, checks the boxes for the desired layers, and clicks the "update map" button (not shown in figures). The browser 50 processes the inputs and generates a corresponding command which is transmitted to the web-based GIS viewer 124 via XML/AJAX technology. Next, the web-based GIS viewer 124 performs the SPR subprocess to retrieve available points from user-selected layers regardless of source and renders the same into a visualization for display on the user's browser 50 (Step 532).
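The "update map" command of Step 530 could be serialized as a small XML document. The patent specifies only "XML / AJAX technology", so the element and attribute names below are invented for illustration.

```python
import xml.etree.ElementTree as ET

def build_update_request(visualization, layers):
    """Serialize the user's map selections into a hypothetical XML command."""
    root = ET.Element("updateMap", visualization=visualization)
    for layer in layers:
        ET.SubElement(root, "layer", name=layer)
    return ET.tostring(root, encoding="unicode")

# The representative example: flood water levels, city busses, and nursing
# homes overlaid on a 2D map.
request = build_update_request(
    "2d", ["flood water depth", "city busses", "nursing homes"]
)
print(request)
```

In the described system the browser would transmit such a payload asynchronously (AJAX) and the viewer would parse it to drive the SPR subprocess of Step 532.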
As a representative example, the web-based GIS viewer receives the request generated by the user in step 530 and responds by contacting the collaboration tool server 120 to retrieve the requested layers (nursing homes and city busses) and contacting the secure data system 160 to retrieve the other requested layers (topographical and flood waters). In each case, the user's credentials may also be passed so that each respective secure data system 160 is able to determine whether, and which, data points to expose. In one implementation, the communication from the web-based GIS viewer 124 to the collaboration tool server 120 is accomplished via an XML web service to authenticate and retrieve data, while the communication to the secure data system 160 (i.e., ESRI ArcIMS servers) uses the ArcIMS proprietary communications protocol. Following SPR data retrieval, the web-based GIS viewer 124 adds the requested points, contours, or other graphical features (e.g., floodwater coloring based on depth) to the visualization type requested by the user. The geographical reference points 254 are rendered on the visualization using various representative icons and colors. Each geographical reference point 254 is also provided on the visualization with certain business attributes such as name or status. Each geographical reference point 254 is also provided by the web-based GIS viewer 124 on the visualization with a navigable URL to cause the browser 50 to request the collaboration tool server 120 to provide detailed information on that point and/or a tabular report of related points from the corresponding information store 128. Next, the web-based GIS viewer 124 sends the updated visualization and control user interface to the user's browser 50 (Step 534). As a representative example, the web-based GIS viewer transmits the result of step 532 back to the browser 50 as discussed above.
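The per-point annotation described above, an icon, a business attribute, and a navigable drill-down URL, can be sketched as a small record builder. The icon table, URL pattern, and hostname are assumptions for illustration; the patent does not specify them.

```python
# Hypothetical icon assignments per layer; unknown layers fall back to "pin".
ICONS = {"nursing_homes": "cross", "city_busses": "bus"}

def annotate_point(layer, point_id, name, lat, lon):
    """Build the annotation the viewer attaches to one geographical reference point."""
    return {
        "layer": layer,
        "icon": ICONS.get(layer, "pin"),
        "label": name,                      # business attribute, e.g. name or status
        "position": (lat, lon),
        # Navigable URL: clicking it asks the collaboration tool server for the
        # detailed record behind this point (invented endpoint pattern).
        "url": f"https://collab.example/stores/{layer}/points/{point_id}",
    }

point = annotate_point("city_busses", 17, "Bus 17", 35.22, -97.44)
print(point["icon"], point["url"])
```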
In one implementation, this is accomplished via standard internet communications protocols, including TCP/IP, HTTP, and SSL. The browser 50 displays the visualization and the control user interface provided by the web-based GIS viewer 124, allowing the user to pan, zoom, extract, and use other known mapping functionality options (Step 552). As a representative example, the web-based GIS viewer 124 displays the result of step 532 on the user's browser 50. In one implementation, the browser uses a mixture of static HTML, DHTML, and AJAX technologies to achieve this result. After providing the visualization to the browser 50 in step 552, the process 500 can take several courses based on user input via the browser 50. The user is free to end the GIS visualization process 500, perhaps to return their browser 50 to the collaboration tool server 120 to access other unrelated data. This is depicted in Fig. 5 by the Stop oval. The visualization process may take two other courses of action, as discussed below. First, the user may adjust the selected layers, change the visualization type, or activate point extract or other options of the control user interface requiring an updated visualization, in which case the browser sends a detailed request back to the web-based GIS viewer 124 (Step 562). As a representative example, the user decides to add the school bus yard layer to the display and change the visualization type to aerial photograph. In one implementation, the user selects the desired map via a dropdown of the control user interface provided by the web-based GIS viewer 124, checks the boxes for the desired layers, and clicks the "update map" button. The browser processes the inputs and generates a corresponding command which is transmitted to the web-based GIS viewer 124 via XML/AJAX technology. After performing step 562, processing continues at step 532, for example, to iterate several times through steps 532, 534, 552, and 562 in turn as the user adjusts the information displayed to accomplish their business purpose.
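The iteration through steps 532, 534, 552, and 562 amounts to a refine-and-re-render loop. A minimal sketch, with an invented add/remove action model standing in for browser input:

```python
def visualization_session(initial_layers, actions):
    """Apply a sequence of add/remove layer actions, returning each rendered state."""
    layers = set(initial_layers)
    frames = []
    for op, layer in actions:          # step 562: detailed request from the browser
        if op == "add":
            layers.add(layer)
        elif op == "remove":
            layers.discard(layer)
        frames.append(sorted(layers))  # steps 532/534: re-render and send back
    return frames

# The user adds the school bus yard layer, then drops the city bus layer.
frames = visualization_session(
    {"flood water depth", "city busses"},
    [("add", "school bus yards"), ("remove", "city busses")],
)
print(frames[-1])
# → ['flood water depth', 'school bus yards']
```

Each returned frame corresponds to one pass through render (532), transmit (534), and display (552) before the next user adjustment (562).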
In response to performing step 574, processing continues at step 502, where operation reverts back to the collaboration tool server 120. As will be apparent to a discerning reader, the same cycle can be started at point Start 2 of Fig. 5 and carried around and back to Start 2. Each of the components 120, 122, and 124 of the collaboration tool system 102 may be installed in the same physical computer system or on separate co-located or geographically-dispersed computer systems for redundancy, survivability, and/or load sharing. Additional secure computer systems 60 (one or many) may be co-located or geographically dispersed, with connections provided by any relevant secure connection technology. Auxiliary business data sources 122 (one or many) may also be co-located, dispersed, or both, and connected with any relevant technology. External Applications 70 (one or many) may also be co-located, dispersed, or both, and connected with any relevant technology. The collaboration tool server 120, the geo-coding driver 122, and the web-based GIS viewer 124 each may comprise or may be included in one or more code sections containing instructions for performing respective operations as discussed herein, which may be accessed and run by the CPU 104. Although the collaboration tool server 120, the geo-coding driver 122, and the web-based GIS viewer 124 and other programs are described as being implemented as software, the present invention may be implemented as a combination of hardware and software or hardware alone (such as in an ASIC device). Also, one of skill in the art will appreciate that the programs may comprise or may be included in a data processing device, which may be a separate server, communicating with the collaboration tool system 102 via the network 51. In addition, although aspects of one implementation shown in Fig.
1A are depicted as being stored in memory, one skilled in the art will appreciate that all or part of systems and methods consistent with the present invention may be stored on or read from other computer-readable media, such as secondary storage devices, like hard disks, floppy disks, and CD-ROM; a carrier wave received from a network such as the Internet; or other forms of ROM or RAM either currently known or later developed. Further, although specific components of data processing system 100 have been described, a data processing system suitable for use with methods, systems, and articles of manufacture consistent with the present invention may contain additional or different components. 1. A method in a data processing system for extending the business data associated with a network-based user collaboration tool engine to include spatial reference information for collaborative visualization, the collaboration tool engine having one or more business data type schemas for generating an information store container associated with a respective business data element, the method comprising: providing a control specification to the collaboration tool engine, the control specification having one or more information store type schemas, each information store type schema identifying a corresponding spatial data type schema in association with a geographical data type schema; receiving a request to generate a new information store; generating, via the collaboration tool engine, the new information store based on the one or more business type schemas; providing to the requestor a roster identifying the one or more information store type schemas; receiving a second request to add the one or more information store type schemas to the new information store; and in response to the second request, generating, via the collaboration tool engine, a spatial reference point container in the new information store for each of the one or more information store type schemas based on the
spatial data type schema identified by the respective information store type schema. 2. A method of claim 1, wherein each information store type schema identifies the corresponding spatial data type schema in association with a geographical data type schema. 3. A method of claim 2, further comprising, in response to the second request, generating, via the collaboration tool engine, a geographical reference point container in the new information store for each of the one or more information store type schemas based on the geographical data type schema identified by the respective information store type schema. 4. A method of claim 3, further comprising: monitoring the collaboration tool engine to identify when one of the spatial reference point containers has been updated; and when the one spatial reference point container has been updated, obtaining a new geographical reference point from a geo-coding source system based on the updated one spatial reference point and storing the new geographical reference point in the geographical reference point container associated with the one spatial reference point container. 5. A method of claim 3, wherein the collaboration tool engine has an event subscription interface, the method further comprising: assigning an event within the event subscription interface to an event trigger module associated with the new information store, wherein the event identifies when the collaboration tool engine has updated one of the spatial point containers of the new information store and an occurrence of the event is communicated to the event trigger module; and when an occurrence of the event is communicated to the event trigger module, obtaining a new geographical reference point from a geo-coding source system based on the updated one spatial reference point and storing the new geographical reference point in the geographical reference point container associated with the one spatial reference point container. 6.
A method in a data processing system for extending the business data associated with a network-based user collaboration tool engine to include spatial reference information for collaborative visualization, the collaboration tool engine having one or more business data type schemas for generating an information store container associated with a respective business data element, the method comprising: providing a control specification to the collaboration tool engine, the control specification having one or more information store type schemas, each information store type schema including a spatial data type schema; determining whether the collaboration tool engine has generated or exposed a business data information store record or document having a spatial reference point in accordance with the control specification; when it is determined that the collaboration tool engine has generated or exposed a business data information store record or document having a spatial reference point in accordance with the control specification, generating a geographical reference point corresponding to the spatial reference point; and storing the geographical reference point with the spatial reference point. 7. A method of claim 6, further comprising: receiving a user request to view the business data information store record or document; and in response to the user request, displaying a map or other visualization to reflect the geographical reference point. 8. A method of claim 7, wherein the collaboration tool engine has a plurality of business data type schemas, the method further comprising: prompting the collaboration tool engine to generate the business data information store based on at least one of the business data type schemas and the one or more information store type schemas. 9.
A method of claim 7, wherein the collaboration tool engine has a plurality of business data type schemas, further comprising: displaying a roster identifying each of the business data type schemas and the information store type schemas; receiving a request identifying one of the business data type schemas and one of the information store type schemas; and prompting the collaboration tool engine to generate the business data information store to have a first container based on the one business data type schema identified in the request and a second container based on the spatial data type schema of the one information store type schema identified in the request. schema identifying a corresponding spatial data type schema in association with a geographical data type schema; the collaboration tool system further including a memory having a collaboration tool server and a geo-code driver operatively connected to the collaboration tool server and operatively configured to communicate with a geo-coding source, the collaboration tool server being operatively configured to control a collaboration tool engine based on the control specification, receive a request to generate a new information store, generate the new information store based on the one or more business type schemas via the collaboration tool engine, provide to the requestor a roster identifying the one or more information store type schemas, receive a second request to add the one or more information store type schemas to the new information store, and, in response to the second request, generate a spatial reference point container in the new information store for each of the one or more information store type schemas based on the spatial data type schema identified by the respective information store type schema; the collaboration tool system further including a processor to run the collaboration tool server and the geo-code driver. 11. 
A data processing system of claim 10, wherein each information store type schema identifies the corresponding spatial data type schema in association with a geographical data type schema. 12. A data processing system of claim 11, wherein the collaboration tool server is further operatively configured to, in response to the second request, generate a geographical reference point container in the new information store for each of the one or more information store type schemas based on the geographical data type schema identified by the respective information store type schema. 13. A data processing system of claim 12, wherein the geo-code driver is operatively configured to monitor the collaboration tool engine to identify when one of the spatial reference point containers has been updated; and when the one spatial reference point container has been updated, obtain a new geographical reference point from a geo-coding source system based on the updated one spatial reference point and store the new geographical reference point in the geographical reference point container associated with the one spatial reference point container. 14.
A data processing system of claim 12, wherein the collaboration tool engine has an event subscription interface and the collaboration tool server has an event trigger module associated with the new information store and assigned an event within the event subscription interface, the event identifying when the collaboration tool engine has updated one of the spatial point containers of the new information store, the event trigger module being operatively configured to receive an occurrence of the event from the event subscription interface and, in response to receiving an occurrence of the event, to obtain a new geographical reference point from the geo-coding source system based on the updated one spatial reference point and store the new geographical reference point in the geographical reference point container associated with the one spatial reference point container.
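The event-trigger geo-coding flow recited in claims 4, 5, and 13, update a spatial reference point, fire an event, geo-code, and store the result alongside, can be sketched minimally as follows. All names are assumptions, and the geo-coding source system is stubbed with a lookup table.

```python
# Stand-in for the geo-coding source system: address -> (lat, lon).
GEOCODER = {"123 Main St": (35.47, -97.52)}

class InformationStore:
    """Toy information store with an event subscription interface."""

    def __init__(self):
        self.records = {}
        self.listeners = []               # subscribed event trigger modules

    def subscribe(self, listener):
        self.listeners.append(listener)

    def update_spatial_point(self, record_id, address):
        # Update the spatial reference point container, then signal an
        # occurrence of the event to every subscriber.
        self.records[record_id] = {"spatial": address, "geo": None}
        for listener in self.listeners:
            listener(self, record_id)

def geo_code_trigger(store, record_id):
    """Event trigger module: geo-code the updated spatial reference point."""
    record = store.records[record_id]
    record["geo"] = GEOCODER.get(record["spatial"])

store = InformationStore()
store.subscribe(geo_code_trigger)
store.update_spatial_point("nursing-home-1", "123 Main St")
print(store.records["nursing-home-1"]["geo"])
# → (35.47, -97.52)
```

The design point the claims make is that geo-coding is driven by the engine's own update events rather than by polling, so the geographical reference point container stays synchronized with its spatial reference point container.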
Tuesday, 1 March 1994, 6 p.m. The President (interpretation from French): As this is the first meeting of the Security Council for the month of March, I should like to take this opportunity to pay tribute, on behalf of the Council, to His Excellency Mr. Roble Olhaye, Permanent Representative of Djibouti to the United Nations, for his service as President of the Security Council for the month of February 1994. I am sure I speak for all members of the Security Council in expressing deep appreciation to Ambassador Olhaye for the great diplomatic skill and unfailing courtesy with which he conducted the Council's business last month. The President (interpretation from French): In accordance with the decisions taken at the 3340th meeting, I invite the representative of Israel to take a place at the Council table; I invite the Permanent Observer of Palestine to take a place at the Council table; I invite the representatives of Afghanistan, Algeria, Egypt, Greece, Indonesia, the Islamic Republic of Iran, Jordan, Kuwait, Lebanon, the Libyan Arab Jamahiriya, Malaysia, Qatar, Sudan, the Syrian Arab Republic, Tunisia, Turkey and the United Arab Emirates to take the places reserved for them at the side of the Council Chamber. At the invitation of the President, Mr. Yaacobi (Israel) took a place at the Council table; Mr. Al-Kidwa (Palestine) took a place at the Council table; Mr. Farhadi (Afghanistan), Mr. Lamamra (Algeria), Mr. Elaraby (Egypt), Mr. Exarchos (Greece), Mr. Nasier (Indonesia), Mr. Khoshroo (Islamic Republic of Iran), Mr. Bataineh (Jordan), Mr. Abulhasan (Kuwait), Mr. Makkawi (Lebanon), Mr. Elhouderi (Libyan Arab Jamahiriya), Mr. Razali (Malaysia), Mr. Al-Ni'mah (Qatar), Mr. Yassin (Sudan), Mr. Awad (Syrian Arab Republic), Mr.
Abdellah (Tunisia), Mr. Batu (Turkey) and Mr. Samhan (United Arab Emirates) took the places reserved for them at the side of the Council Chamber. The President (interpretation from French): I should like to inform the Council that I have received letters from the representatives of Bahrain, Bangladesh, Japan, Mauritania and Ukraine in which they request to be invited to participate in the discussion of the item on the Council's agenda. In accordance with the usual practice, I propose, with the consent of the Council, to invite those representatives to participate in the discussion without the right to vote, in accordance with the relevant provisions of the Charter and rule 37 of the Council's provisional rules of procedure. At the invitation of the President, Mr. Al-Faihani (Bahrain), Mr. Majid (Bangladesh), Mr. Motomura (Japan), Mr. Ould Mohamed Mahmoud (Mauritania) and Mr. Khandogy (Ukraine) took the places reserved for them at the side of the Council Chamber. "In my capacity as Chairman of the Committee on the Exercise of the Inalienable Rights of the Palestinian People, I have the honour to request that I be invited to participate in the debate on the agenda item 'The situation in the occupied Arab territories', under rule 39 of the provisional rules of procedure of the Security Council." The Security Council will now resume its consideration of the item on the agenda. Members of the Council have before them document S/1994/231, which contains the text of a letter dated 28 February 1994 from the Permanent Representative of Greece to the United Nations addressed to the Secretary-General, transmitting the text of a declaration of the European Union. Members of the Council have also received photocopies of a letter dated 28 February 1994 from the Permanent Representative of the Sudan to the United Nations addressed to the President of the Security Council, which will be issued as document S/1994/236. The first speaker is the representative of Afghanistan. 
I invite him to take a place at the Council table and to make his statement. Mr. Farhadi (Afghanistan) (interpretation from French): As the first speaker as you begin your presidency of the Security Council on this first day of March, I should also like, Sir, to be the first to congratulate you. The Council now has to decide on some most serious and complex issues, and we have every confidence in your abilities as an experienced diplomat and in your wealth of knowledge of these issues to enable you to guide the work of the Council to a successful conclusion. With the deepest pain and great indignation, the entire world has condemned the massacre committed in Al-Khalil, or Hebron, before sunrise on Friday, 25 February, the fifteenth day of Ramadan, the month of fasting. We are here first and foremost to raise our voices in echo of the voices of vast numbers of human beings. It should be clearly understood that in occupied Palestinian territory those armed by the occupier are shooting not only adolescents who throw stones at the jeeps of the occupying army, but also those who fast, as Abraham and Moses fasted, and those who prostrate themselves before God, the common God of the three Abrahamic religions. "For God did take Abraham for a friend." The same verse states that every believer should follow the religion of Abraham - "millat Ibrahim" - the religion of a hanif, that is, of an upright person. For Muslims, Abraham is the spiritual patriarch of all the sincere believers of mankind - "al-nas" - as is stated in the last verse of Sura 22 of the Holy Book of Islam, a Book where the name of Abraham is mentioned 70 times. 
This is why the haram - the precinct of this place of Islamic pilgrimage in the city of Al-Khalil, the precinct that was desecrated at dawn on Friday by a terrorist with the infidel's heart in a massacre of believers who had already begun their fast and were prostrating themselves before the Lord of Abraham, who is also their Lord - is the most sacred site in Palestinian territory, after, obviously, the Haram al-Sharif, which is the holy precinct of the city of Al-Quds, or Jerusalem. It might be said that the points I have just made are religious points. But even lay people, whose way of life is prevalent here at the United Nations, will find in them certain socio-political facts of great importance. Numerically speaking, the abominable massacre of 25 February set a record. Historically speaking, this is not the first time we have condemned such an occurrence. Indeed, three and a half years ago, on 8 October 1990, in the Haram al-Sharif, this holy sanctuary of the city of Al-Quds, or Jerusalem, violence committed by the Israeli security forces left more than 20 dead and more than 150 wounded among Palestinian civilians who were in the act of praying. Security Council resolution 672 (1990) called upon Israel, the occupying Power, to abide scrupulously by its legal obligations and responsibilities under the Fourth Geneva Convention, applicable to all the territories occupied by Israel since 1967 - including, clearly, Jerusalem. However, for a quarter of a century now, Israeli propaganda has generally tried to suggest to the inhabitants of settlements that they were living in territory that belonged to them. This unofficial attitude of the Israeli authorities prepared the ground for ideological indoctrination that runs counter to the whole purpose of the peace process now under way.
Since the Palestinian civil population has been living in the same territories for centuries, those preaching hate for the peace process were able to gain tremendous influence, particularly in the settlements, which were set up under military occupation. It is therefore important, first and foremost, for anyone trying to promote the continuation of the peace process to disarm ideologically the fundamentalist extremists in these settlements and to convince them that the land on which they have been installing themselves since 1967 is occupied temporarily and unjustly. It belongs, in fact, to the Palestinians, who have been living there for many centuries. Anyone who strives for peace while insisting on maintaining settlements in territory occupied militarily, and who supports the settlers and their actions by means of armed force will never achieve his goals of peace. This is the lesson that we learned from the end of colonialism in the twentieth century, and it is also the lesson of thousands of years of the history of nations. In our century, in this age of automatic weapons, instruments that make it possible to kill a whole crowd of human beings in a few seconds, it is important to start by disarming these settlers, whether or not they be psychopaths or lunatics. For the moment, the massacre at dawn on Friday has profoundly harmed not only the Palestinians but also the credibility of the peace process. For the future, these settlements are centres for terrorists and have been rightly called time bombs that could destroy the entire peace effort. The massacre of 25 February has demonstrated irrefutably that the peace process is totally incompatible with the actions of the armed occupation forces of armed settlers, whether or not they wear the uniform of Israeli reservists. Let us recall that this point has already been made. 
In paragraph 4 of resolution 681 (1990), of 20 December 1990, the Israeli Government is called upon to accept the de jure applicability of the Fourth Geneva Convention of 1949 to all the territories occupied by Israel since 1967 and to abide scrupulously by the provisions of that Convention. These settlements carry the potential for sabotage of the entire Palestinian-Israeli peace process. The acts of these settlers in particular and the occupation security forces in general are contrary to the principles clearly enunciated by the current Government of Israel, and hence to the Oslo negotiations of August 1993 and the Washington Declaration of Principles of 13 September 1993. The need to protect the Palestinian civilians in the occupied Palestinian territories, including Al-Quds (Jerusalem), not only is based on international law but is also required and made imperative by a practical and concrete situation which the Government of Israel should recognize as being of prime importance. What is the solution to all this? Three consecutive stages can be clearly envisaged. First, the extremists, the fundamentalists, among the settlers must be disarmed immediately. Secondly, the rest of the settlers must be disarmed immediately thereafter. Thirdly - that is, in a subsequent stage - there must be an accelerated dismantling of the Israeli settlements in all the occupied Palestinian territories, including Al-Quds (Jerusalem). This might, of course, require the building of housing within Israel to which these settlers can be transferred. The ways and means of carrying out such a plan would be part of the peace negotiations. At the same time, international protection of the Palestinian civilian population is clearly necessary. The duration of such protection will be a function of the success of the peace process - the quicker that process is completed, the shorter will be the time needed for such protection.
This international protection would be a positive factor, militating in favour of satisfactory progress in the peace process. In conclusion, an overwhelming consequence for the prospects for peace would seem to be the need to revise fundamentally the agenda for the ongoing peace negotiations - particularly to reorder the priorities radically. The most urgent priority now is clearly the need to afford protection to the Palestinian civilians in the occupied territories. All the parties involved, including the United Nations, must deal with this. It is a new chapter in the book concerning respect for the inalienable rights of the Palestinian people - first and foremost their right to live and to survive, and then their right to independence. The President (interpretation from French): I thank the representative of Afghanistan for the kind words he addressed to me. Mr. Samhan (United Arab Emirates) (interpretation from Arabic): I should like to begin by expressing to you, Sir, the congratulations of the delegation of the United Arab Emirates on your assumption of the presidency of the Security Council for this month. We are certain that your diplomatic skills and your experience will contribute to the Council's achievement of positive results. I have the honour also of expressing our appreciation and paying a tribute to your predecessor, the Permanent Representative of the sister country of Djibouti, for the efficient and able way in which he presided over the Council's deliberations last month. I wish also to express to you, Mr. President, and to the other members of the Council our thanks and appreciation for having convened the Council and for giving us the opportunity of addressing it. We were shocked, indeed stunned, by the magnitude of the tragedy that befell our Palestinian brothers as a result of the heinous carnage perpetrated by an evil, criminal Israeli hand against persons praying at Al-Haram Al-Ibrahimi at dawn on Friday, 25 February 1994. 
The people and the Government of the United Arab Emirates have condemned in the strongest possible terms this criminal massacre that took a toll of more than 50 martyrs and resulted in the wounding of hundreds of other Palestinians. Since this massacre took place, many Palestinians have been killed or wounded by Israeli military forces. Their only crime was that they were expressing their indignation at this heinous act perpetrated against their kith and kin in Al-Khalil. Their only crime was that they were demanding that an end be put to the Israeli occupation of their homeland and that they be given an opportunity to exercise their inalienable, legitimate national rights, like all the other peoples of the world that have attained independence and rid themselves of the yoke of foreign occupation. The criminal transgression against the sacred Al-Haram Al-Ibrahimi and the murder of the Muslims praying there was a heinous crime, totally incompatible with the sanctity of the Holy Places which are held in such high esteem by all the divinely revealed religions. It was also a flagrant violation of international humanitarian law. This bloody carnage cannot be regarded as an isolated event. It is linked with two similar events: the arson at the Holy Al Aqsa Mosque in El-Quds (Jerusalem) in 1969, and the killing of 20 persons and the wounding of hundreds of others at the Haram Al-Sharif in Jerusalem in 1990 by Israeli military forces, without any regard or respect for the fact that the Holy Places are sacred to millions of Muslims in the world. Israel's record since its occupation of the Palestinian territories is replete with examples of the desecration of holy shrines. Anyone who analyzes these three crimes against Muslim places of worship will conclude that Israel is either unable or unwilling to provide adequate protection for the holy shrines and for those worshipping in them. Whatever the reason may be, that position is untenable. 
The Israeli authorities take no measures to prevent extremist Israeli groups from perpetrating their terrorist crimes and acts of aggression against the Palestinian people. To the contrary, the Israeli authorities permit those groups and other settlers to carry weapons, on the pretext of self-defence. But all the evidence indicates that those weapons have been used only to attack unarmed Palestinian civilians. Israel is attempting to shirk its responsibility for this carnage, on the pretext that the perpetrator was deranged. We reject that argument, both in form and in content. It was the Israeli authorities themselves who permitted him to carry weapons; it was they who permitted him to enter the mosque during prayers; they took no swift, effective action to stop the carnage. That proves that the Israeli authorities condone the crimes perpetrated by the settlers against the Palestinian people. Moreover, preliminary reports indicate that some Israeli soldiers actually participated in the murders. The Israeli authorities confiscated Palestinian lands and have permitted the establishment of settlements on those lands. They have transferred tens of thousands of settlers from all over the world to live there, in total contravention of the provisions of the Fourth Geneva Convention of 1949. That policy is in flagrant violation of United Nations resolutions declaring the settlements illegal, and of the norms of international conduct, the United Nations Charter and the rules of international law. In the face of this situation in the occupied Palestinian territories, which is deteriorating owing to Israeli practices against the Palestinian civilian population, the international community, and in particular the Security Council, cannot continue to be a helpless spectator content with declarations of condemnation and resolutions of denunciation. It has the duty to take decisive steps to guarantee the security and safety of the Palestinian people. 
Hence, the Government of the United Arab Emirates calls upon the international community, and especially the Security Council, fully to shoulder its responsibility under the Charter and under the rules and norms of international law by adopting an unambiguous decision to protect the Palestinian people in the occupied Palestinian territories, including Al-Quds Al-Sharif. It must also seek the implementation of Security Council resolution 681 (1990) and appoint an international commission to investigate the circumstances under which the carnage was perpetrated in Al-Haram Al-Ibrahimi in Al-Khalil, taking whatever steps are required to enable that commission to discharge its mandate. Recent statements by Israel regarding its intention to disarm some settlers are not enough to prevent the repetition of such crimes and carnage against the Palestinian people. We therefore consider it important - indeed, necessary - that all settlers without exception be disarmed. That measure must be implemented in the context of a policy leading to the dismantling of existing settlements and the return of the settlers to Israel, in conformity with Security Council resolution 465 (1980), paragraph 6 of which calls for the dismantling of existing settlements. That is the right approach for the Israeli authorities to take if they are indeed seriously interested in achieving a just, comprehensive, peaceful settlement of the Arab-Israeli conflict and the question of Palestine. The President (interpretation from French): The next speaker is the representative of the Libyan Arab Jamahiriya. I invite him to take a place at the Council table and to make his statement. Mr. Elhouderi (Libyan Arab Jamahiriya) (interpretation from Arabic): Let me begin, Sir, by saying how pleased I am to see you presiding over the work of the Security Council for this month, which coincides with the holy month of Ramadan. It is a time when the holy places are filled and Muslims - their faces glowing with a sense of trust and serenity - seek, through prayer, fasting and good works, to come closer to God.
I am sure, Sir, that your ability and diplomatic skill will enable the Council to do a remarkable job in the face of these tragic events. I wish also to congratulate your predecessor, Ambassador Olhaye of Djibouti, who guided the work of the Council last month with great skill. The events the Council is considering today constitute a crime of collective extermination that in a brief moment resulted in 60 deaths and some 300 injuries among Palestinians. A Zionist gang led by an extremist American Jew committed this crime. This gang is motivated by the hatred that infects their hearts so deeply that they are blind to the holiness of a House of God. This gang preyed upon the trust and serenity of the worshippers in order to overwhelm them with gunfire, and thus achieved their premeditated criminal aims. Following the crime, the malevolent masters of propaganda announced that the crime had been committed by a single criminal. That constitutes an attempt to harbour criminals and save them from punishment, and to conceal the plans to terrorize and exterminate the Palestinian people. This propaganda reiterates that the criminal was a madman; we are accustomed to hearing such claims following each crime. But the evidence shows that the perpetrators are part of a gang of the followers of Rabbi Meir Kahane, a terrorist, extremist Jew. This gang has a long history of attacking places of worship. They were the ones who set fire to the holy mosque of Jerusalem in an attempt to blow it up. They are also the ones who attacked Al-Haram Al-Ibrahimi several times in the past, opening fire on worshippers and stealing its historic wealth. If they are indeed madmen they should be kept in hospitals to prevent them from bringing about such suffering. Such horrible carnage could not have been brought about without premeditation and without protection from the Zionist entity in occupied Palestine, because we know that Al-Haram Al-Ibrahimi is under the protection of the Zionist army. 
Where was that army at the time this barbarous act was committed? The Zionist army did not remain idle: it opened fire on the Palestinians who had gathered in the courtyard of the local hospital in Hebron to give blood or to seek news of their loved ones. The Zionist entity could not commit such terrorist criminal acts without the moral and material support of the United States of America, which turns a blind eye to the Zionist entity's violations of human rights and, indeed, impedes efforts to bring it to heel. The United States is interested only in the continuation of the peace talks. This was a premeditated and organized criminal act, carried out with sophisticated weapons. It was a flagrant, violent act of aggression that typifies what the Palestinians have to endure every day. While the Zionist entity denounces these acts today, others remain silent and even join the funeral cortege of their victims. One may rightly wonder whether we are really on the road to peace, because, so far, that road leads only to the interests of the Zionist entity: To speak of peace and security is to speak of the peace and security enjoyed by that entity. But there is no security or peace for the Palestinian people. The Zionist entity, thanks to American support, is already enjoying the fruits of peace, even before peace has been established. There are agreements to provide American weapons. There are trade agreements to ensure the security of that State, whereas the Palestinian people only finds destruction, death and expulsion. The Arab peoples in general and the children with stones in particular cannot believe in such a process; they cannot accept a peace based on inequality and oppression. The Arab peoples will support a just peace, one which is aimed at freeing Palestinian and occupied Arab territories from occupation and extremism so that Muslims, Christians and Jews may live together in a democratic State, as is possible today in South Africa. 
That is the only solution likely to bring about a just peace, and not these false and shameful initiatives. The Security Council is today considering a flagrant act of aggression - a barbaric terrorist act which threatens international peace and security. The Security Council must shoulder its responsibilities with the same enthusiasm and determination it has shown in other cases which have been considered to be threats to international peace and security. The Council is therefore faced with a difficult test today. It has but two options open to it: to continue with the policy of double standards imposed by most of the permanent members, a policy which has brought about inequality and destroyed the Council's credibility or to shoulder its responsibility for safeguarding international peace and security by implementing the Charter of the United Nations. The peace of the Palestinian people and its security are seriously endangered. Hence the Security Council is in duty bound to adopt forthwith the following measures to guarantee to that people its right to live in peace and security: firstly, the organization of an international enquiry within the framework of the Security Council to ascertain the identities of the perpetrators of the crime; secondly, protection of the Palestinians from attacks by settlers; thirdly, confiscation of the settlers' weapons and withdrawal of the Zionist army from Palestinian towns and villages; fourthly, the dismantlement of Zionist settlements, which are in fact citadels of terrorism and provocation. If the Security Council fails to assume its responsibility with resolve and persists in its hypocritical policy, it will mean that Arab and Palestinian blood is cheap and does not merit the interest of the Council. The President (interpretation from French): I thank the representative of the Libyan Arab Jamahiriya for his kind words addressed to me. Mr. 
Makkawi (Lebanon): Sir, allow me first to extend, on behalf of my delegation, sincerest congratulations on your assumption of the presidency for this month. In Lebanon, we are all well aware of your commitment to the cause of peace in the Middle East and are confident that the work of the Council will be conducted most efficiently under your wise and proven leadership. I should also like to thank your predecessor, my brother and friend, the Permanent Representative of the Republic of Djibouti, for the exemplary manner in which he conducted the affairs of the Council during the month of February. The whole world is stunned by the horrendous massacre at Al-Haram Al-Ibrahimi in Al-Khalil, Hebron. This is the first time a man has entered a place of worship during the holy month of Ramadan and gunned down hundreds of people prostrated in prayer. Clearly, the root of this tragedy is the continuation of the Israeli occupation, the insidious growth of settlements and the sustained influx of Jewish fundamentalists into the occupied territories. Peace in the Middle East cannot be achieved when Palestinians are allotted only 20 per cent of their historic land and when that 20 per cent is traumatized by 144 illegal settlements within the Palestinian homeland. All of us would like to believe that this was the act of a lone, crazed gunman, but the fact remains that Baruch Goldstein was the product of a society and ideology supported by former Israeli Governments and funded by ultra-Zionist groups. Some 300 dead and wounded are hardly the work of one man, a fact which speaks of complicity in the crime. The Government of Israel, despite its involvement in the Middle East peace process and its signing of the Declaration of Principles in Washington, has done nothing to discourage settlements or check extremist activity in the territories.
Instead it has engaged in tactical manoeuvres aimed at postponing an Israeli troop withdrawal from Gaza and Jericho, such as its petty arguments over Palestinian troop sizes, border controls and Jericho's boundaries. Consequently, the 13 December deadline for withdrawal from Gaza and Jericho has come and gone with no substantive progress achieved. Now, with the news of the massacre, Palestinians are finding that hope is giving way to anger and desperation. Since 1967 Israel, the occupying Power, has failed to provide protection for the civilian population under its occupation, as it is its obligation to do under the Fourth Geneva Convention of 1949. Furthermore, with the growth of militant Judaism in the occupied lands, it is incumbent upon Israel to redouble its efforts to provide this protection. However, Israel's characteristic negligence and brutality are attested to by the fact that Israeli soldiers not only facilitated the massacre by allowing the gunman to enter the mosque, but were responsible for some of the deaths at the scene, as reported by the news media. And, as if this were not enough, soldiers have killed many demonstrators. Lebanon knows what it is to suffer under the brutal policies and practices of occupation, because we have endured the Israeli occupation of southern Lebanon for the past 16 years. Not a single day passes without the death of innocent civilians and the destruction of homes and property. None the less, the people of Lebanon are thoroughly convinced of the need for a just, lasting and comprehensive peace in the Middle East, which must encompass the Lebanese, Syrian, Jordanian and Palestinian tracks. However, peace cannot prevail so long as the occupation continues, the settlements remain and Jewish extremists continue to threaten the security of Palestinians in the West Bank, Gaza and East Jerusalem.
As has been echoed time and again here, a just and equitable solution is needed to the Palestinian problem, which lies at the root of the broader Arab-Israeli conflict. Such a solution is the only hope for stability in the region and for the triumph of moderation. Peace in the Middle East cannot be achieved unless the Palestinians are granted their legitimate national rights and Israel withdraws from the Palestinian and Syrian territories in accordance with Security Council resolutions 242 (1967) and 338 (1973), and from southern Lebanon, in accordance with Security Council resolution 425 (1978). The reality of the need for a comprehensive peace is demonstrated by the fact that the effects of the massacre have been felt throughout the Arab and Muslim worlds. In Lebanon, Syria, Jordan and Egypt demonstrators rallied vehemently against the massacre, casting a dark shadow over the peace process. The Governments of Lebanon, Syria and Jordan responded by suspending talks with Israel which had been scheduled for this week. Also, in Lebanon, just two days after the massacre, a bomb exploded during a church service, killing nine innocent worshippers and wounding more than 60 people. This proves that the forces which have tried over the years to destabilize Lebanon and undermine our unity and national reconciliation are behind this bombing, which they carried out in order to divert attention away from the massacre at Al-Haram Al-Ibrahimi in Al-Khalil. However, they will not succeed. Lebanon is committed to the peace process and is strong enough to overcome this conspiracy. The Government and the people of Lebanon are standing firm to ensure that the tranquillity that has reigned in Lebanon for the last three years will continue. Furthermore, we will do everything in our power to bring the criminals to justice.
At this critical juncture the Government of Israel must make a decision upon which the fate of the Middle East peace process will depend: does it want peace enough to dismantle the illegal settlements and bring back 130,000 settlers to Israel proper, or does it want to jettison the whole peace process and face what is quickly becoming a rising and insurmountable wave of religious extremism? Israel's condemnation of the massacre is not enough. Nor are its promises to arrest, disarm and restrict the movement of a handful of extremists. These empty gestures will do little to affect the situation on the ground unless all settlers are disarmed. After all, if settlers in the territories have no confidence in Israeli military protection, then why should the Palestinians? Either allow all civilians in the territories to carry arms, or disarm them all. I should like to conclude by saying that, in addition to the immediate disarming of settlers, what is urgently needed is for the Security Council to establish a presence of international observers in the occupied Palestinian territories. The President (interpretation from French): I thank the representative of Lebanon for the kind words he addressed to me. There are a number of speakers remaining on my list. In view of the lateness of the hour, I intend to adjourn the meeting now. With the concurrence of the members of the Council, the next meeting of the Security Council to continue consideration of the item on the agenda will be fixed in consultation with the members of the Council. Before adjourning the meeting, I should like, on behalf of the members of the Council, to express our warm thanks and appreciation to the Assistant Secretary-General, Mr. Benon Sevan, for his exemplary service to the Council. We wish him well as he takes up his new functions in the Organization. The meeting rose at 7.15 p.m.
Levels of nitrogen oxides in the air are still falling across the US, but satellite measurements show the reduction has slowed down unexpectedly since 2011. Air pollution levels are falling in the US – but not as rapidly as the US Environmental Protection Agency thinks they are. The US has been reducing its emissions of nitrogen oxides (NOx) and carbon monoxide for about 50 years. The EPA keeps tabs on the progress, in part by calculating how technological improvements should change the emissions from vehicles and factories. But those calculations seem to be overestimating the progress being made, according to Zhe Jiang at the University of Science and Technology of China in Hefei. While at the National Center for Atmospheric Research in Boulder, Colorado, Jiang and his colleagues examined pollution data from satellites and ground-based sensors. The team found that NOx concentrations in the air dropped by 7 per cent each year between 2005 and 2009 – but by only 1.7 per cent each year between 2011 and 2015. That’s a 76 per cent slowdown. The EPA’s figures estimate the slowdown should have been just 16 per cent. “Our analysis suggests the EPA overestimated the effect of regulations for heavy duty diesel trucks, which will result in an underestimation of NOx emissions,” says Jiang. The discrepancy might also be due to a relative increase in contributions from off-road sources that are less strictly monitored, like farm equipment and lawn mowers. Last year, a study of data from 61 European cities showed the decline in roadside NO2 emissions since 2010 was larger than expected from government policy projections. Jiang says it’s not clear why certain emissions cuts have been underestimated in Europe and overestimated in the US. It is going to be one of the biggest projects ever undertaken in Antarctica. UK and US scientists will lead a five-year effort to examine the stability of the mighty Thwaites Glacier. 
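The 76 per cent slowdown quoted in the NOx article above follows directly from the two annual decline rates it reports; a quick check of the arithmetic (both rates taken from the article):

```python
# Annual NOx decline rates from satellite and ground-based data,
# as quoted in the article above.
early_rate = 7.0   # per cent per year, 2005-2009
late_rate = 1.7    # per cent per year, 2011-2015

# Relative slowdown between the two periods
slowdown = (early_rate - late_rate) / early_rate
print(f"{slowdown:.0%}")  # prints "76%", matching the quoted figure
```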
This ice stream in the west of the continent is comparable in size to Britain. It is melting and is currently in rapid retreat, accounting for around 4% of global sea-level rise - an amount that has doubled since the mid-1990s. Researchers want to know if Thwaites could collapse. Were it to do so, its lost ice would push up the oceans by 80cm or more. Some computer models have suggested such an outcome is inevitable if conditions continue as they are - albeit on a timescale of centuries. But these simulations need to be anchored in many more real-world observations, which will now be acquired thanks to the joint initiative announced on Monday. "There is still a question in my view as to whether Thwaites has actually entered an irreversible retreat," said Prof David Vaughan, the director of science at the British Antarctic Survey. "It assumes the melt rates we see today continue into the future and that's not guaranteed. Thwaites is clearly on the verge of an irreversible retreat, but to be sure we need 10 years more data," he told BBC News. The UK's Natural Environment Research Council and the US National Science Foundation are going to deploy about 100 scientists to Thwaites on a series of expeditions. The International Thwaites Glacier Collaboration (ITGC) is the two nations' biggest cooperative venture on the White Continent for more than 70 years - since the end of a mapping project on the Antarctic Peninsula in the late 1940s. "Donald Trump is not happy about the price of oil," said Jordan Weissman at Slate. The president recently chided the Organization of the Petroleum Exporting Countries, suggesting the cartel was manipulating global oil supplies in order to drive up prices, which this week briefly topped $75 a barrel, the highest in more than three years. "Looks like OPEC is at it again," Trump tweeted. "Oil prices are artificially Very High! No good and will not be accepted!"
The cost of oil is up roughly 46 percent over last year, and with demand climbing, drivers have seen prices at the pump also soar to three-year highs. The last time oil was north of $70 a barrel, prices "were in the middle of a steep collapse," said Stephanie Yang and Alison Sider at The Wall Street Journal. It was 2014, and the U.S. shale boom and the resumption of drilling in Libya had resulted in a global glut of crude, causing oil prices to crater, eventually to just $26 a barrel. For two years, OPEC countries responded by pumping frantically, hoping to drive U.S. shale operators out of business. But in 2016, they "reversed course" and enlisted other petrostates, such as Russia, to agree to major production cuts. Over time, the cartel successfully rolled back production by more than 1.5 million barrels a day, eliminating the global glut that had kept prices low. Australia has pledged A$500 million (£275m; $379m) to protect the World Heritage-listed Great Barrier Reef. In recent years, the reef has lost 30% of its coral due to bleaching linked to rising sea temperatures and damage from crown-of-thorns starfish. The funding will be used to reduce the runoff of agricultural pesticides and improve water quality. Some of the money will be used to help farmers near the reef modify their practices. Threats to the reef include "large amounts of sediment, nitrogen and pesticide run-off" as well as the crown-of-thorns starfish species, Environment Minister Josh Frydenberg said. The reef can be seen from space and was listed as a world heritage site in 1981 by the United Nations cultural body Unesco. There are 1,052 sites of environmental and cultural importance such as the reef on Unesco's World Heritage List. In 2017, the organisation decided not to place the Great Barrier Reef on its official list of 55 World Heritage sites "in danger". Unprecedented coral bleaching in recent years has caused damage to two-thirds of the reef, aerial surveys in 2017 showed. 
The new EU satellite tasked with tracking dirty air has demonstrated how it will become a powerful tool to monitor emissions from shipping. Sentinel-5P was launched in October last year and this week completed its in-orbit commissioning phase. But already it is clear the satellite's data will be transformative. This latest image reveals the trail of nitrogen dioxide left in the air as ships move in and out of the Mediterranean Sea. The "highway" that the vessels use to navigate the Strait of Gibraltar is easily discerned by S5P's Tropomi instrument. "You really see a straight line because all these ships follow approximately the same route," explained Pepijn Veefkind, Tropomi's principal investigator from the Dutch met office (KNMI). "In this case, we also looked into how many big ships there are in the region [at the time], and there's really not that many - around 20 or so, we estimate - but each one is putting out a lot of NO2." Nitrogen dioxide is a product of the combustion of fuels, in this instance from the burning of marine diesel. But it is also possible to see in the picture the emissions hanging over major urban areas on land that come from cars, trucks and a number of industrial activities. NO2 will be a major contributor to the poorer air quality people living in those areas experience. Sentinel-5P is the next big step because of its greater sensitivity and sharper view of the atmosphere. "Shipping lanes are something we've seen on previous missions but only after we've averaged a lot of data; so, over a month or a year. But with Tropomi we see these shipping lanes with a single image," Dr Veefkind told BBC News. "The resolution we got from our previous instruments was about 20km by 20km. Now, we've gone down to 7km by 3.5km, and we are thinking of going to even smaller pixels." Enhanced geothermal systems marshal the Earth’s subsurface heat for electricity. 
Injecting fluid into the ground for geothermal power generation may have caused the magnitude 5.5 earthquake that shook part of South Korea on November 15, 2017. The liquid, pumped underground by the Pohang power plant, could have triggered a rupture along a nearby fault zone that was already stressed, two new studies suggest. If it’s confirmed that the plant is the culprit, the Pohang quake, which injured 70 people and caused $50 million in damages, would be the largest ever induced by enhanced geothermal systems, or EGS. The technology involves high-pressure pumping of cold water into the ground to widen existing, small fractures in the subsurface, creating paths for the water to circulate and be heated by hot rock. The plant then retrieves the water and converts the heat into power. Researchers examined local seismic network data for the locations and timing of the main earthquake, six foreshocks and hundreds of aftershocks to determine whether the temblors might have been related to fluid injections at the Pohang plant. Almost all of the quakes originated just four to six kilometers below surface points that were within a few kilometers of the plant, report geologist Kwang-Hee Kim of Pusan National University in South Korea and colleagues online April 26 in Science. These factors, combined with the lack of seismic activity in the region before the injections, suggest the injections were to blame for the quakes, the researchers found. Breaking down into its initial building blocks is the key to the polymer’s reusability. A new kind of plastic can, when exposed to the right chemicals, break down into the same basic building blocks that it came from and be rebuilt again and again. The recyclable material is more durable than previous attempts to create reusable plastics, researchers report April 26 in Science. Designing plastics that can be easily reused is one line of attack against the global plastic waste problem.
Only about 10 percent of plastic ever made gets recycled, according to a 2017 study in Science Advances. But the material is so cheap and useful that hundreds of millions of tons of it keeps getting churned out each year. A major impediment to plastic recycling is that most plastics degrade into molecules that aren’t immediately useful. Transforming those molecules back into plastic or into some other product requires many chemical reactions, which makes the recycling process less efficient. And while biodegradable plastics have become popular in recent years, they break down only if the right microbes are present. More often than not, these plastics end up lingering in landfills or floating in the ocean. Creating plastics that could be broken down into their building blocks and reused without additional processing and purifying could help reduce the pollution buildup. But designing such a plastic polymer is a balancing act, says Michael Shaver, a polymer chemist at the University of Edinburgh who wasn’t part of the study. Polymers are long chains of small molecules, called monomers, that link together like beads on a string. Monomers that need extreme temperatures or too much chemical coaxing to join up into polymers might not be practical building blocks. And resulting polymers need to be stable up to a high enough temperature that, say, pouring hot coffee into a cup made of them won’t destabilize the chains and make the plastic melt into a sticky puddle. An earthquake that struck South Korea in 2017 was caused by a geothermal energy project that injected water underground – and risk assessments missed it. South Korea’s most damaging earthquake for a century may have been man-made. Two investigations both conclude that the quake was caused by injections of water deep underground, as part of a project to harness geothermal energy. 
The findings also suggest that seismologists’ method for estimating how big an earthquake might be caused by pumping water underground is dangerously flawed. Several dozen people were hurt and many buildings damaged in Pohang by the magnitude-5.5 quake in November last year. It was the second most powerful earthquake in South Korea since 1978. Now two independent studies have found that the quake and its main aftershocks were 2 kilometres or less from a site where water was being injected 4 kilometres underground. The goal was to extract energy from underground heat, by injecting water into deep, hot rocks then drawing the heated water up through a second borehole. During the entire project, which ended last September, engineers pumped down around 12,000 cubic metres of water. The drilling operations probably caused the quake, both teams conclude based on seismic and satellite data. “If the Pohang earthquake proves to be human-caused, it would be the largest known associated with deep geothermal energy, and this would certainly impact future projects,” says team member Stefan Wiemer of the Swiss Seismological Service. Henry David Thoreau, after a new study found that the ecosystem of once pristine Walden Pond in Massachusetts has been devastated by “anthropogenic nutrient inputs”—that is, tourist swimmers peeing in the pond. More than 40 companies have signed up to a pact to cut plastic pollution over the next seven years. The firms, which include Coca-Cola and Asda, have promised to honour a number of pledges such as eliminating single-use packaging through better design. They have joined the government, trade associations and campaigners to form the UK Plastics Pact. The signatories are responsible for more than 80% of plastic packaging on products sold through UK supermarkets. 
One of the promises which companies, such as consumer goods giant Procter & Gamble and Marks & Spencer, have signed up to is to make 100% of plastic packaging ready for recycling or composting by 2025. Led by the sustainability campaign group WRAP, the pact is described as a "once-in-a-lifetime opportunity" to rethink plastic both to make use of its value and to stop it damaging the environment. WRAP's chief executive Marcus Gover said: "This requires a whole scale transformation of the plastics system and can only be achieved by bringing together all links in the chain under a shared commitment to act. "That is what makes the UK Plastics Pact unique. It unites every body, business and organisation with a will to act on plastic pollution. We will never have a better time to act, and together we can." Freshwater acidification was supposed to be a thing of the past, but it’s back and it could be even worse this time. FOR environmentalists of a certain vintage, the words “acid” and “lakes” can stir strangely fond memories. Back in the 1970s and 80s, acid rain from coal-fired power stations was turning lakes across the northern hemisphere into vinegar. Scientists identified the problem, activists campaigned, governments listened. Today, in the West at least, acid rain is largely a thing of the past. But acid lakes are not. Even while many are still recovering from being deluged with acid rain, they face a resumed assault – this time from carbon dioxide. High concentrations of the gas in the atmosphere mean more is dissolving in the world’s lakes and rivers. Goodbye sulphuric acid, hello carbonic acid. The new acid invasion shouldn’t come as a surprise. For over a decade, marine biologists have been on alert for the effects of acidifying oceans as rising amounts of atmospheric CO2 dissolve into them. But until now, the parallel acidification of rivers and lakes has largely escaped attention.
That changed in January with the publication of the first research to pinpoint freshwater lakes accumulating CO2 from the air, and growing more acidic as a result. “The rate of acidification is really quite fast – three times faster than in the world’s oceans,” says Linda Weiss of the Ruhr University Bochum in Germany, who led the study (Current Biology, vol 28, p 327). That is obviously a cause for concern. Ocean acidification – sometimes known as “the other CO2 problem” – is expected to have severe effects on marine ecosystems. About a third of all the CO2 released into the atmosphere dissolves in seawater and turns into carbonic acid. Since the industrial revolution, the pH of the ocean surface has fallen from 8.16 to 8.05, a 30 per cent increase in the concentration of hydrogen ions. This isn’t a concern yet, but if it continues it will eventually cause some corals and shells to dissolve. Freshwater acidification might turn out to be a trivial problem but we don’t know how much danger aquatic life is in unless we can track down more data. IN 2003, a scary phrase made its debut in the scientific literature. In a paper in Nature, a couple of scientists at the Lawrence Livermore National Laboratory in California cautioned that, over the coming centuries, carbon dioxide released from burning fossil fuels would dissolve in the ocean and make it significantly less alkaline. They called this phenomenon “ocean acidification” and warned that it might have consequences for marine life – while admitting that there was a “paucity of relevant observations”. Fast forward 15 years and we are now sure that ocean acidification will have major impacts on marine biology, especially corals. We also have another new scary phenomenon to contend with: freshwater acidification (see “Lakes of Acid”). This is the ocean problem applied to rivers and lakes. It, too, may have serious effects on aquatic life, though at this early stage there is another paucity of relevant data. 
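The "30 per cent increase in the concentration of hydrogen ions" from a pH drop of only 0.11 can look surprising, but it follows directly from the pH scale being logarithmic. A quick back-of-envelope check, using only the two pH values quoted above:

```python
# pH = -log10([H+]), so a small pH drop means a large rise in hydrogen ions.
pH_preindustrial = 8.16
pH_today = 8.05

h_pre = 10 ** -pH_preindustrial   # [H+] in mol/L
h_now = 10 ** -pH_today

increase = (h_now / h_pre - 1) * 100
print(f"Hydrogen ion concentration up {increase:.0f}%")  # about 29%
```

That is the "30 per cent" in the text, rounded; the same logarithmic effect is why the three-times-faster freshwater acidification rate matters.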
Indeed, “paucity” is a word that cautious scientists and climate change deniers alike may use about the evidence for freshwater acidification. Right now, the evidence that it is happening at all is limited to just four reservoirs in the Ruhr region of Germany. Scientists in other places, notably the UK’s Lake District, haven’t seen any sign of it. This could be confirmation of a long-held consensus that, for a variety of complex reasons, fresh water isn’t as susceptible as seawater to acidification by CO2. If so, that would be good news. Planet Earth has enough environmental problems as it is. And while a new one could provide fresh impetus to campaigns, the acidification of a few reservoirs in Germany is not going to make anyone change their ways. Record levels of microplastics have been found trapped inside sea ice floating in the Arctic. Ice cores gathered across the Arctic Ocean reveal microplastics at concentrations two to three times higher than previously recorded. As sea ice melts with climate change, the plastic will be released back into the water, with unknown effects on wildlife, say German scientists. Traces of 17 different types of plastic were found in frozen seawater. Their "plastic fingerprint" suggests they were carried on ocean currents from the huge garbage patch in the Pacific Ocean or arose locally due to pollution from shipping and fishing. More than half of the microplastic particles within the ice were so small that they could easily be ingested by sea life, said Ilka Peeken of the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research in Bremerhaven, Germany, who led the study. "No one can say for certain how harmful these tiny plastic particles are for marine life, or ultimately also for human beings," she said. The ice cores were gathered from five regions throughout the Arctic Ocean in the spring of 2014 and summer of 2015. 
They were taken back to the laboratory, where they were analysed for their unique plastic "fingerprint". "Using this approach, we also discovered plastic particles that were only 11 micrometres across," said co-researcher Gunnar Gerdts, also from the Alfred Wegener Institute. "That's roughly one-sixth the diameter of a human hair, and also explains why we found concentrations of over 12,000 particles per litre of sea ice - which is two to three times higher than what we'd found in past measurements." The researchers found a total of 17 different types of plastic in the sea ice, including packaging materials like polyethylene and polypropylene, but also paints, nylon, polyester, and cellulose acetate (used to make cigarette filters). They say the plastic found its way to the Arctic Ocean from the huge garbage patch in the Pacific Ocean or from ships' paint and fishing nets. Oil prices hit $75 on Tuesday, the highest level in nearly three and a half years, as fears mounted over the prospect of new US sanctions on Iran. Brent crude jumped for the sixth consecutive day, trading as high as $75.27 before falling back slightly. The US will decide by 12 May whether to abandon a nuclear deal with Iran and re-impose sanctions. Such a move on the third-biggest oil producer in the Opec cartel threatens to further tighten global supplies. Oil prices have been rising since the 14 nations in Opec, as well as other producers including Russia, decided to restrict output last year. In November they agreed to extend those cuts until the end of 2018. Tamas Varga of oil broker PVM said the prospect of President Trump pulling the US out of the nuclear accord that Iran signed with world powers in 2015 was the most significant element of Brent's recent rally. "All bets are off on the US staying in the nuclear agreement," he said.
The US president has said that unless European allies fix what he has called "terrible flaws" in the accord by 12 May, he will restore US economic sanctions on Iran. The other nations that signed the deal - the UK, France, Germany, Russia and China - all want to keep in place the agreement, which has halted Iran's nuclear programme in return for most international sanctions being lifted. Restoring US economic sanctions on Iran would be a severe blow to the pact. Stephen Innes of futures brokerage OANDA said new sanctions against Tehran could push oil prices up by as much as $5 a barrel. Two decades after it was first published, the chart linking carbon emissions and global warming is as relevant as ever, says Olive Heffernan. Today is the 20th anniversary of one of the most iconic images in science. On 23 April 1998, US climate scientist Michael Mann and two colleagues published a paper in Nature. Central to it was a graph that would become known as the “hockey stick”. This graph was fairly simple, but its implications were monumental. Unlike any image before, it showed that Earth’s temperature had been relatively stable for 500 years, only to suddenly spike in the 20th century. The hockey stick – named for its long flat line with a sharp upturn – was a strong visual aid that bolstered mounting evidence for human-produced greenhouse gases warming the climate. A year later, Mann and his colleagues extended their analysis further back, showing that the 20th century was hotter than any other time in the past millennium. Seized on by the media, the hockey stick became a global news story. It also became the cornerstone of a bitter political debate, one that would change Mann’s life. Climate sceptics with a vested interest in denying the hockey stick’s central message – that use of fossil fuels is harming the environment – waged an all-out war to discredit Mann and the group’s findings. First, they went after the science. 
To reconstruct climate going back 500 or 1000 years – long before weather stations or satellites existed – the scientists had to use other indicators of temperature, such as tree rings in long-lived species and ice cores. They also had to develop a new statistical approach to translate such data into annual surface temperatures for the northern hemisphere. As with all scientific analyses, there were uncertainties in the data and Mann and his colleagues had to make judgements on how best to handle them. Sceptics seized on those uncertainties to claim the graph was an artefact, and that using certain proxy data – especially from bristlecone pine trees – biased the results. Then, Mann’s adversaries went after him personally. Writing in Scientific American, Mann described how he was vilified on the editorial pages of The Wall Street Journal, had his e-mails stolen, and has received multiple death threats since the hockey stick article was published. Congress eventually asked the US National Academy of Sciences to weigh in with an independent review. Published in 2006, this endorsed Mann’s findings but had the scientists repeat the analysis with more and better data. Since then, numerous studies have reiterated the hockey stick’s central conclusion: globally, it’s hotter now than it has been in 1000 years, and according to one analysis, it’s possibly hotter now than it has been for more than 10,000 years. Former New York City Mayor Michael Bloomberg says he will pay $4.5m (£3.2m) to cover some of the lapsed US commitment to the Paris climate accord. He said he had a responsibility to help improve the environment because of President Donald Trump's decision to pull out of the deal. The withdrawal was announced last June and sparked international condemnation. It will make the US in effect the only country not to be part of the Paris accord.
The Paris agreement commits the US and 187 other countries to keeping rising global temperatures "well below" 2C above pre-industrial levels. As part of the agreement, the US had pledged $3bn to the Green Climate Fund, set up by the UN to help countries deal with the effects of global warming. The money promised by Mr Bloomberg does not aim to cover this, but the US contribution to the UN's climate change secretariat. "America made a commitment and, as an American, if the government's not going to do it then we all have a responsibility," Mr Bloomberg said on CBS. "I'm able to do it. So, yes, I'm going to send them a cheque for the monies that America had promised to the organisation as though they got it from the federal government." His charity, Bloomberg Philanthropies, offered $15m to cover a separate climate change shortfall last year. It said the money would go to the United Nations Framework Convention on Climate Change (UNFCCC). The founder of a citizens' movement that helped expose the water crisis in Flint, Michigan, is one of the recipients of the prestigious Goldman Environmental Prize. Nearly 100,000 residents of Flint were left without safe tap water after lead began leaching into the supply. Mother of four LeeAnne Walters led a citizens' movement that tested the tap water to expose the health threat. Tests showed lead levels in her water were seven times the acceptable limit. In 2014, the water in Ms Walters' home turned brownish and she noticed rashes on her three-year-old twins. Her daughters' hair then fell out in clumps. Walters spent months reading technical documents about the Flint water system. She then teamed up with environmental engineer Dr Marc Edwards, from Virginia Tech, who helped her conduct extensive water testing in the city. She methodically sampled each zip code in Flint and set up a system to ensure the integrity of the tests, working over 100 hours per week for three weeks. 
They showed lead levels as high as 13,200 parts per billion in some parts of the city - more than twice the level classified as hazardous waste by the US Environmental Protection Agency (EPA). The contamination was traced to the city switching its water supply away from Detroit's system, which draws from Lake Huron, and beginning instead to draw water from the Flint River. Flint was in a financial state of emergency and this switch was meant to save the city millions of dollars. But the water from the Flint River was more corrosive than Lake Huron's water and the pipes began leaching lead, which is a powerful neurotoxin. The city has since switched back to using Detroit's water system. But Flint continues to wrestle with the aftermath of the crisis, and pipe replacement is ongoing. How can you create public transport in the jungle without polluting it? The isolated Achuar peoples of Ecuador have created an ingenious solution. Since April 2017, a canoe powered solely by solar energy has travelled back and forth along the 67-km (42-mile) stretch of the Capahuari and Pastaza rivers that connects the nine isolated communities living along their banks. The boat Tapiatpia - named after a mythical electric eel in the area - gives the Amazon its first solar-powered public transport system. "The solar canoe is an ideal solution for this place because there is a network of interconnected navigable rivers and a great need for alternative transport," says Oliver Utne, a US environmentalist who has been working with the community since 2011. The community previously relied entirely on gasoline canoes, known as peque peques, but they are expensive to run and only owned by a few families per village. The canoe costs passengers just $1 (71p) each per stop, whereas peque peques cost $5-10 in gasoline for the same journey. Gasoline costs five times more here than in the capital Quito because there are no roads and it needs to be flown in.
Of course there is an environmental impact too - the canoe means no pollution in one of the world's richest areas of biodiversity. With a roof of 32 solar panels mounted on a 16 x 2-metre (52 x 7-foot) fibreglass hull of traditional canoe design, Tapiatpia carries 18 passengers. New research examines damage from heat and gives projections for the future. It’s no secret that warming ocean waters have devastated many of the world’s coral reefs. For instance, a 2016 marine heat wave killed 30 percent of coral in the Great Barrier Reef, a study published online April 18 in Nature reports. But some coral species may be able to adapt and survive in warmer waters for another century, or even two, a second team reports April 19 in PLOS Genetics. And that offers a glimmer of hope for future ocean biodiversity. “What we’ve just experienced [in the Great Barrier Reef] is one hell of a natural selection experiment,” says coral reef expert Terry Hughes of James Cook University in Townsville, Australia. In total, about 50 percent of the reef’s corals have died since 2016, he says. A bright side, maybe: “The ones that are left are tougher.” While the marine heat wave particularly damaged staghorn corals (Acropora millepora), this species may ultimately prove to be one of the resilient ones, Mikhail Matz, a biologist at the University of Texas at Austin, and his colleagues report in PLOS Genetics. A new analysis shows the branching, fast-growing coral — a key reef builder — is genetically diverse enough to survive for another 100 to 250 years, depending on how quickly the planet warms. Other studies have suggested coral reefs may not last this century. What happens to coral reefs affects vast underwater ecosystems, and the hundreds of millions of people who depend on those ecosystems for fishing, tourism and more. So scientists want to understand how corals might fare as climate change brings longer and stronger marine heat waves (SN: 4/10/18, p. 5).
Long-term experiment finds a surprising flip in the rules for plant photosynthesis. Two major groups of plants have shown a surprising reversal of fortunes in the face of rising levels of carbon dioxide in the atmosphere. During a 20-year field experiment in Minnesota, a widespread group of plants that initially grew faster when fed more CO2 stopped doing so after 12 years, researchers report in the April 20 Science. Meanwhile, the extra CO2 began to stimulate the growth of a less common group of plants that includes many grasses. This switcheroo, if it holds true elsewhere, suggests that in the future the majority of Earth’s plants might not soak up as much of the greenhouse gas as previously expected, while some grasslands might take up more. “We need to be less sure about what land ecosystems will do and what we expect in the future,” says ecosystem ecologist Peter Reich of the University of Minnesota in St. Paul, who led the study. Today, land plants scrub about a third of the CO2 that humans emit into the air. “We need to be more worried,” he says, about whether that trend continues. The two kinds of plants in the study respond differently to CO2 because they use different types of photosynthesis. About 97 percent of plant species, including all trees, use a method called C3, which gets its name from the three-carbon molecules it produces. Most plants using the other method, called C4, are grasses. Most solar cells are limited by how much energy their electrons can absorb. Denting their materials could help them harvest more electricity and breeze past that limit. Putting a dent in solar cells may actually make them more efficient. It could even pave the way to solar cells that break a fundamental limit on how much energy the material can absorb. Solar cells work via the photovoltaic effect, in which light imparts energy to electrons, allowing them to move around and create electrical current. 
Most modern solar cells place two different types of semiconductor materials next to each other, which directs the electrical current to flow from one material to the other. These solar cells are limited by how much energy the electrons can absorb from sunlight. Too little energy and the electrons don’t absorb any of it, but too much and the extra goes unused. Marin Alexe at the University of Warwick in the UK and his colleagues have come up with a new way to generate energy from sunlight within just one material – and it might be able to bypass that limit, making use of more of the sun’s energy. In a single material, electrical current can only flow when the molecular structure is not perfectly symmetrical. In symmetrical materials, electrons can jostle around but there’s nothing directing or organizing their motion to make it useful. Alexe and his colleagues found a way to make any semiconductor into a solar cell: simply break its molecular symmetry. By pressing the rounded tip of an atomic force microscope into a sample of a symmetrical semiconductor, they squeezed some of the molecules closer together. Some of the last great wildernesses are being considered as likely candidates for geoengineering. It's a sad reflection of climate failings, says Olive Heffernan. Do we have any low-risk global geoengineering options ready to deploy now? The answer, according to leading US climatologist Alan Robock, is no. So it is unsurprising that interest is starting to turn to more limited, localised ideas that look less perilous. The latest involve building artificial islands and 100-metre-high walls to prevent a rising tide of melting polar ice. These examples of targeted geoengineering – a new twist on the controversial idea – could prevent the metre or so of sea level rise that is expected to displace millions of coastal dwellers by 2100.
Scientists presented the idea to the annual European Geosciences Union meeting last week, a gathering of nearly 15,000 earth and space experts, including Robock, in Vienna, Austria. Most geoengineering schemes would directly intervene in the climate across the planet to counter warming, either by reflecting some of the sunlight reaching Earth or by sucking carbon dioxide out of the atmosphere. But the conclusion in Vienna was that their risks are too high, and in some cases too uncertain, to consider them safe to deploy now on a meaningful scale. According to studies presented at the meeting, solar geoengineering could save corals from bleaching and permafrost from melting, for example, but it would also heighten flood risk from torrential rain in Europe and North America. A crisis of plastic waste in Indonesia has become so acute that the army has been called in to help. Rivers and canals are clogged with dense masses of bottles, bags and other plastic packaging. Officials say they are engaged in a "battle" against waste that accumulates as quickly as they clear it. The commander of a military unit in the city of Bandung described it as "our biggest enemy". Like many rapidly developing countries, Indonesia has become notorious for struggling to cope with mountains of rubbish. A population boom has combined with an explosive spread of plastic containers and wrapping replacing natural biodegradable packaging such as banana leaves. The result is that local authorities trying to provide rubbish collection have been unable to keep up with the dramatic expansion of waste generated. And a longstanding culture of throwing rubbish into ditches and streams has meant that any attempt to clean up needs a massive shift in public opinion. In Bandung, Indonesia's third largest city, we witnessed the shocking sight of a concentration of plastic waste so thick that it looked like an iceberg and blocked a major tributary. 
Soldiers deployed on a barge used nets to try to extract bags, Styrofoam food boxes and bottles, a seemingly futile task because all the time more plastic flowed their way from further upstream. To encourage recycling, the authorities in the Bandung area are supporting initiatives in "eco-villages" where residents can bring old plastic items and earn small amounts of money in exchange. The plastics are then divided by type. In one project we visited, two women patiently cut apart bottles and small water cups because separating the different kinds of polymers earns higher prices. Scientists’ tweak led to more breakdown of plastics found in polyester and plastic bottles. Just a few tweaks to a bacterial enzyme make it a lean, mean plastic-destroying machine. One type of plastic, polyethylene terephthalate, or PET, is widely used in polyester clothing and disposable bottles and is notoriously persistent in landfills. In 2016, Japanese scientists identified a new species of bacteria, Ideonella sakaiensis, which has a specialized enzyme that can naturally break down PET. Now, an international team of researchers studying the enzyme’s structure has created a variant that’s even more efficient at gobbling plastic, the team reports April 17 in Proceedings of the National Academy of Sciences. The scientists used a technique called X-ray crystallography to examine the enzyme’s structure for clues to its plastic-killing abilities. Then, they genetically tweaked the enzyme to create small variations in the structure, and tested those versions for PET-degrading performance. Some changes made the enzyme work even better. Both the original version and the mutated versions could break down both PET and another, newer bio-based plastic called PEF, short for polyethylene-2,5-furandicarboxylate. With a little more engineering, these enzymes could someday feast at landfills. 
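The mutate-and-screen workflow described above – make small variants of an enzyme, assay each for activity, keep the improvements – can be sketched in miniature. The following is an illustrative toy only: the sequence, the scoring function and every number are hypothetical stand-ins, not data or methods from the PNAS study.

```python
import random

WILD_TYPE = "MNFPRASRLM"          # toy 10-residue "enzyme" sequence (made up)
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def point_mutants(seq):
    """Yield every single-residue substitution of seq."""
    for i, res in enumerate(seq):
        for aa in AMINO_ACIDS:
            if aa != res:
                yield seq[:i] + aa + seq[i + 1:]

def assay(seq):
    """Stand-in for a PET-degradation assay: a deterministic pseudo-score."""
    random.seed(seq)              # same sequence -> same score
    return random.random()

# Screen all single-residue variants and keep those beating the wild type.
baseline = assay(WILD_TYPE)
improved = [m for m in point_mutants(WILD_TYPE) if assay(m) > baseline]
print(f"{len(improved)} of {10 * 19} variants beat the wild type")
```

In real protein engineering the assay step is the expensive part, which is why the researchers used the crystal structure to decide which residues were worth mutating rather than screening exhaustively like this sketch does.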
Prolonged ocean warming events, known as marine heatwaves, take a significant toll on the complex ecosystem of the Great Barrier Reef. This is according to a new study on the impacts of the 2016 marine heatwave, published in Nature. In surveying the 3,863 individual reefs that make up the system off Australia's north-east coast, scientists found that 29% of communities were affected. In some cases up to 90% of coral died, in a process known as bleaching. This occurs when the stress of elevated temperatures causes a breakdown of the coral's symbiotic relationship with its algae, which provide the coral with energy to survive, and give the reef its distinctive colours. Certain coral species are more susceptible to this heat-induced stress, and the 2016 marine heatwave saw the death of many tabular and staghorn corals, which are a key part of the reef's structure. Researchers led by Terry Hughes at Australia's ARC Centre of Excellence for Coral Reef Studies looked at aerial observations of the entire 2,300km reef between March and November 2016. These were combined with underwater surveys at over 100 locations. "We saw some corals rapidly dying," explained Dr Scott Heron, another of the study's authors. "Bleaching... is essentially a starvation process that occurs over one to two months. This rapid onset is not the same starvation mechanism. The best way to describe it is akin to cooking," added the Noaa Coral Reef Watch scientist. They found that these "cooked" corals were dying within two to three weeks. The northern section of the reef, some 700km long, was worst affected, with 50% of the coral cover in the reef's shallowest areas being lost within eight months. The reef has been so severely damaged by record ocean heat that it has had no chance to recover fully - and may never be the same again. THE Great Barrier Reef has been so severely damaged by record ocean heat that it will never be the same again in our lifetimes or those of our grandchildren. 
With ever hotter ocean heatwaves set to occur every few years, the reef will have no chance to recover fully. “In 30 years’ time, we’ll still have a reef, but it will look very different,” says Terry Hughes at James Cook University in Australia, whose team has conducted surveys of the reef to assess the damage. We already knew that the iconic reef was badly damaged by recent heat events. Hughes’s surveys show that the corals started dying at far lower levels of heat stress than expected. They also show that the structure of a third of the 4000 individual reefs that make up the Great Barrier Reef has been degraded, altering ecosystems. The current damage began with a fierce ocean heatwave in early 2016, which directly killed many corals. Overall, 30 per cent of coral cover was lost, making it the worst die-off on record. A second heatwave at the start of 2017 then killed another 20 per cent. While some areas have recovered, corals are still dying in the worst-hit regions. Alarmingly, the corals’ tolerance of short periods of very high sea temperatures or of longer periods of less severe heat was just half as much as forecast by NASA and other research teams (Nature, doi.org/cngq). The corals also died faster than predicted. After sea surface temperatures reached record levels in March 2016, for example, millions of corals perished in just two weeks. “They simply cooked,” says Hughes. The UK's biggest coffee chain Costa Coffee has said it will recycle as many disposable cups as it sells by 2020 in a "cup recycling revolution". Under the scheme, 500 million coffee cups a year would be recycled, including some sold by rivals, it said. It will encourage waste collection firms to collect the cups by paying them a supplement of £70 a tonne. About 2.5 billion disposable coffee cups are thrown away each year in the UK and 99.75% are not recycled. They have a mixture of paper and plastic in their inner lining - designed to make them both heat- and leak-proof. 
Environmental campaigners have welcomed Costa's move. Costa managing director Dominic Paul told the BBC the move was "a cup-recycling revolution". "By the end of 2020, we'll effectively be cup-neutral. We'll be recycling as many cups as we put into the system," he said. Costa said "misconceptions" had arisen about whether a coffee cup could be recycled because of the plastic layer, which had "previously been considered difficult to separate". If one part of an ice shelf starts to thin, it can trigger rapid ice losses in other regions as much as 900 kilometres away – contributing to sea level rise. The thinning of one part of an ice shelf can speed up ice movement in another part of the ice shelf up to 900 kilometres away, a computer model suggests. The finding is concerning because many ice shelves are already being thinned by warm sea water flowing beneath them. Ronja Reese of the Potsdam Institute for Climate Impact Research in Germany has been using a computer model of ice shelves to explore the consequences of this thinning. Her team recently ran simulations to see what happens when ice shelves thin by 1 to 10 metres over areas of 20 by 20 kilometres. According to their results, even such highly localised thinning can have immediate impacts hundreds of kilometres away, Reese told a meeting of the European Geosciences Union in Vienna last week. For example, in the model, thinning at the western coast of the Ross Ice Shelf near Ross Island immediately causes an increased outflow of ice from the Bindschadler Ice Stream, located more than 900 kilometres across the ice shelf. Because ice shelves float on the ocean, sea level does not rise as they thin. However, ice shelves hold back land-based glaciers flowing into the ocean. Some glaciers in the Antarctic are already speeding up and dumping more ice in the sea, thereby raising global sea levels.
Plants in the UK are set to blaze into flower virtually simultaneously, because flowering has been delayed two weeks by the unusually cold weather. UK gardens are likely to be ablaze with colour this week as plants all break into flower simultaneously. This “condensed spring” follows much dismal weather: the UK spring has seen snow, twice the usual amount of rainfall and temperatures that are below average. “Cold has held spring back by two weeks, so suddenly everything will come out in a rush,” says Guy Barter at the Royal Horticultural Society, which has forecast the condensed spring. Plants need a period of cold to kick-start genetic programs for flowering. “It’s like a sort of dosing,” says Elizabeth Wolkovich at Harvard University. “Each day brings a plant some dose of cold or warmth, and once they’ve got the full dose of the two requirements they can flower.” Warmer winters caused by climate change could pose more of a problem for certain plants than cold snaps. In 2012, Wolkovich found that some plants are delaying flowering because warm winters don’t supply enough cold. That could harm these species and animals that rely on them. A type of plankton described as part of "the beating heart" of the oceans has been named after the BBC's Blue Planet series. The tiny plant-like organism is regarded as a key element of the marine ecosystem. Scientists at University College London (UCL) bestowed the honour on Sir David Attenborough and the documentary team. It's believed to be the first time a species has been named after a television programme. A single-celled algae, the plankton was collected in the South Atlantic but is found throughout the world's oceans. It will now be officially known as Syracosphaera azureaplaneta, the latter translating from the Latin as 'blue planet'. During a visit to UCL to receive the honour, Sir David said it was "a great compliment" and he was delighted that it would help raise awareness of the importance of plankton to the oceans. 
"If you said that plankton, the phytoplankton, the green oxygen-producing plankton in the oceans is more important to our atmosphere than the whole of the rainforest, which I think is true, people would be astonished. "They are an essential element in the whole cycle of oxygen production and carbon dioxide and all the rest of it, and you mess about with this sort of thing and the echoes and the reverberations and the consequences extend throughout the atmosphere." The Blue Planet plankton is only about 10 microns across - the diameter of a typical human hair is about seven times greater. It only lives for a few days but in that brief time creates shapes of incredible intricacy and beauty. Scientists have improved a naturally occurring enzyme which can digest some of our most commonly polluting plastics. PET, the strong plastic commonly used in bottles, takes hundreds of years to break down in the environment. The modified enzyme, known as PETase, can start breaking down the same material in just a few days. This could revolutionise the recycling process, allowing plastics to be re-used more effectively. UK consumers use around 13 billion plastic drinks bottles a year but more than three billion are not recycled. Originally discovered in Japan, the enzyme is produced by a bacterium which "eats" PET. Ideonella sakaiensis uses the plastic as its major energy source. Researchers reported in 2016 that they had found the strain living in sediments at a bottle recycling site in the port city of Sakai. "[PET] has only been around in vast quantities over the last 50 years, so it's actually not a very long timescale for a bacteria to have evolved to eat something so man-made," commented Prof John McGeehan, who was involved in the current study. Polyesters, the group of plastics that PET (also called polyethylene terephthalate) belongs to, do occur in nature. "They protect plant leaves," explained the University of Portsmouth researcher. 
"Bacteria have been evolving for millions of years to eat that." The switch to PET was nevertheless "quite unexpected" and an international team of scientists set out to determine how the PETase enzyme had evolved. Warm mountain winds are causing extensive winter melting on the surface of the Larsen C ice shelf, which could contribute to its breakup. The average winter temperature on the Antarctic peninsula is a chilly -15°C. Yet automated instruments on the Larsen C ice shelf have recorded extensive surface melting even during the long, dark winter. When wind blows over high mountains, the descending air can warm by several degrees. On the Antarctic peninsula, this phenomenon – known as a foehn wind – can sometimes raise air temperature above zero. This was known to happen during summer but has now been found to be occurring even in mid-winter. As the peninsula continues to warm, it will happen more and more often. “We can thus expect more winter melt this century,” Peter Kuipers Munneke of Utrecht University in the Netherlands told a meeting of the European Geosciences Union in Vienna this week. This winter melting is likely helping to destabilise the Larsen C ice shelf, which lost a huge chunk last year. Surface melting is thought to have played a big part in the breakup of the nearby Larsen B ice shelf in 2002. Kuipers Munneke and his colleagues made their discovery after installing an automated weather station in Cabinet Inlet, a region of the Larsen C ice shelf, in 2015. The station has instruments that can detect snow melt. They were surprised to discover extensive winter melting often lasting several days. “Over the three-year period, up to 25 per cent of the melt was happening in winter,” said Kuipers Munneke. “Peak intensities of this winter melt even exceed summertime values.” The findings will soon be published in Geophysical Research Letters. 4-13-18 Carbon-free shipping is possible, so why aren’t we doing it? 
New UN-agreed limits on carbon emissions from shipping don’t go far or fast enough, especially as we already have the tech to make shipping carbon-free. Ships produce more than 2 per cent of the carbon emissions warming the planet. According to some estimates, those emissions could triple by 2050 if nothing is done. And until now, next to nothing has been done. Shipping, along with aviation, has been excluded from climate agreements. But delegates at the International Maritime Organization (IMO), the UN agency that regulates international shipping, have just agreed on a target of reducing the sector’s emissions by at least 50 per cent by 2050. This sounds like great news, but island states and some European countries wanted cuts of up to 100 per cent by the same deadline. “Today the IMO has made history,” said the president of the Marshall Islands, Hilda Heine. “While it may not be enough to give my country the certainty it wanted, it makes it clear that international shipping will now urgently reduce emissions.” Surprisingly, stricter cuts are actually feasible. While curbing aviation emissions remains a huge technical challenge, ships are easier. In fact, last month an Organisation for Economic Co-operation and Development report concluded that with full deployment of existing technologies alone, shipping emissions could be cut 95 per cent by 2035. How? The first thing is to change the way ships operate. For example, reducing ship speeds could deliver fuel savings of up to two-thirds. While this sounds easy, it would reduce owners’ annual profits, so they won’t do it voluntarily. A significant shift in the system of ocean currents that helps keep parts of Europe warm could send temperatures in the UK lower, scientists have found. They say the Atlantic Ocean circulation system is weaker now than it has been for more than 1,000 years - and has changed significantly in the past 150. 
The study, in the journal Nature, says it may be a response to increased melting of ice and is likely to continue. Researchers say that could have an impact on Atlantic ecosystems. Scientists involved in the Atlas project - the largest study of deep Atlantic ecosystems ever undertaken - say the impact will not be of the order played out in the 2004 Hollywood blockbuster The Day After Tomorrow. But they say changes to the conveyor-belt-like system - also known as the Atlantic Meridional Overturning Circulation (Amoc) - could cool the North Atlantic and north-west Europe and transform some deep-ocean ecosystems. That could also affect temperature-sensitive species like coral, and even Atlantic cod. Scientists believe the pattern is a response to fresh water from melting ice sheets being added to surface ocean water, meaning those surface waters "can't get very dense and sink". "That puts a spanner in this whole system," lead researcher Dr David Thornalley, from University College London, explained. The concept of this system "shutting down" was featured in The Day After Tomorrow. "Obviously that was a sensationalised version," said Dr Thornalley. "But much of the underlying science was correct, and there would be significant changes to climate if it did undergo a catastrophic collapse - although the film made those effects much more catastrophic, and happen much more quickly, than would actually be the case."

Researchers have found lakes that may shed new light on icy worlds in our Solar System. High in the Canadian Arctic, two subglacial bodies of water have been spotted beneath more than 500 metres of ice. The water has an estimated maximum temperature of -10.5C and would need to be very salty to avoid freezing. Similar cold, saline conditions are thought to exist in the subsurface ocean of Jupiter's moon Europa, conditions that may nevertheless have the potential to host life. The findings, from a team led by the University of Alberta, have been published in Science Advances.
The two lakes appeared in a radar survey of the Devon Ice Cap, which sits on Devon Island, in Canada's northern Nunavut territory. "I was super surprised, and a little bit puzzled," Anja Rutishauser, the study's lead author, said of the discovery. "I was definitely not looking for subglacial lakes." Although water systems beneath large ice sheets are being found to be increasingly common, Devon Island's ice cap was thought to be frozen to the bedrock beneath. These are the first subglacial lakes to be observed in the Canadian Arctic, and are estimated to cover areas of five and eight square kilometres respectively. "It's an amazing finding, and one that I really wasn't expecting from the geophysical survey of this small ice cap," commented Prof Martin Siegert from Imperial College London, who was not involved in the study. "To my knowledge, this is a unique lake system. Of the [more than] 400 subglacial lakes in Antarctica, all of them are thought to comprise fresh water. Hence, whatever might be living in it may also be unique," he added. The world’s first ranking of tsunami risks for major tourist beaches shows popular spots like Hawaii and Bali are most in danger. Terrified of tsunamis? Maybe cancel those holidays in Hawaii, Bali or Phuket. They’re all among the top 10 major tourist beaches deemed most at risk of the big waves. “Hawaii is number one because of all the tsunamis that can come from the frequent ‘ring-of-fire’ earthquakes zones, from Japan, Alaska, South America and other regions,” said Andreas Schaefer of the Karlsruhe Institute of Technology in Germany, who has developed the first ever ranking of beach tsunami risk. “Phuket was also among the most prominent at-risk destinations, as was Bali, and parts of Turkey.” Schaefer compiled his ranking using historic data on 10,000 of the largest recorded tsunamis coupled with earthquake and seismic activity in 54 subduction zones. 
He used these figures to batter the 24,000 most popular tourist beaches with virtual tsunamis, then cross-referenced the results with revenue data from governments and tourism operators to work out the potential economic losses at each beach should a real tsunami strike. The rankings reflect likely economic rather than human losses. Typically, tsunamis can ruin beaches and their associated infrastructure, sometimes permanently, by covering them in mud or washing away sand. And sometimes serious tsunamis can destroy economies by scaring away tourists. Following the Indonesian tsunami in 2004, which killed 228,000 people, 20 per cent of beach resorts closed in the Maldives, while in Phang Nga and Phuket in Thailand, two thirds and a quarter of hotels respectively closed within six months of the disaster.

The record-breaking 2017 wildfires in the US generated massive thunderstorms that pumped as much smoke into the stratosphere as a volcanic eruption. The wildfires that raged in northwest America last August were so ferocious that they had the same effect on the planet as a volcanic eruption. The heat and smoke from the fires led to the formation of massive thunderstorms known as pyrocumulonimbus. These storms, called pyroCbs for short, pumped the smoke from the fires so high into the atmosphere that it spread over the entire northern hemisphere and remained there for months, into November and December. It was by far the largest event of this kind ever recorded. “This was the mother of all pyroCbs,” said David Peterson of the US Naval Research Laboratory in Monterey, California, who presented his team’s findings this week at a meeting of the European Geosciences Union in Vienna. With 2017 a record year for wildfires in the US, the worry is that this phenomenon will become more common as the planet warms. PyroCbs form from wildfires when conditions are right for the hot air and smoke to generate clouds, which can sometimes develop into a full-blown thunderstorm.
“The difference is that the thunderstorm is driven by fire heat, and you end up with a very dirty thunderstorm,” said Peterson. Worse still, the smoke can sometimes reach the lower stratosphere, where it can spread long distances, as it did last year. “It’s like a great chimney taking smoke to high altitudes,” said Peterson.

The relentless campaign to find and sink Germany's WWII battleship, the Tirpitz, has left a mark on the landscape that is still evident today. The largest vessel in Hitler's Kriegsmarine, it was stationed for much of the war along the Norwegian coast to deter an Allied invasion. The German navy would hide the ship in fjords and screen it with chemical fog. This "smoke" did enormous damage to the surrounding trees, damage that is recorded in their growth rings. Claudia Hartl, from the Johannes Gutenberg University in Mainz, Germany, stumbled across the impact while examining pines at Kåfjord near Alta. The dendrochronologist was collecting wood cores to build up a picture of past climate in the area. Extreme cold and even insect infestation can severely stunt annual growth in a stand, but neither of these causes could explain the total absence of rings seen in some trees dated to 1945. A colleague suggested it could have something to do with the Tirpitz, which was anchored the previous year at Kåfjord, where it was attacked by Allied bombers. Archive documents show the ship released chlorosulphuric acid to camouflage its position. "We think this artificial smoke damaged the needles on the trees," Dr Hartl told BBC News. "If trees don't have needles they can't photosynthesise and they can't produce biomass. In pine trees, needles usually last from three to seven years because they're evergreens. So, if the trees lose their needles, it can take a very long time for them to recover." In one tree, there is no growth at all for nine years from 1945. "Afterwards, it recovered but it took 30 years to get back to normal growth.
It's still there; it's still alive, and it's a very impressive tree," Dr Hartl said. In other pines, rings are present but extremely thin - easy to miss. As expected, sampling shows the impacts falling off with distance; only at 4km do trees start to display no effects. The Tirpitz sustained some damage at Kåfjord. However, a continuous seek-and-destroy campaign eventually caught up with the battleship, and it was sunk by RAF Lancasters in late 1944 in Tromso fjord, further to the west.

4-10-18 Ocean heat waves are becoming more common and lasting longer. The extreme events can kill corals and kelp and throw marine ecosystems into chaos. The world’s oceans are sweltering. Over the last century, marine heat waves have become more common and are lasting longer. The annual number of days that some part of the ocean is experiencing a heat wave has increased 54 percent from 1925 to 2016, researchers report April 10 in Nature Communications. Typically, scientists define a marine heat wave as at least five consecutive days of unusually high temperatures for a particular ocean region or season. These extreme temperatures can be lethal for marine species such as corals, kelp and oysters, and can wreak havoc on fisheries and aquaculture (SN: 2/3/18, p. 16). In the new study, the researchers searched for such events in sea surface temperature records stretching back to 1900 and in satellite data gathered since 1982. Not only have the heat waves become 34 percent more common on average, but they also last an average of 17 percent longer, the team found. That trend is mostly driven by climate change warming surface ocean waters, rather than by large atmosphere-ocean climate patterns such as the periodic warming and cooling of waters in the equatorial Pacific called the El Niño-Southern Oscillation. The researchers predict even more frequent marine heat waves in coming decades.
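The five-consecutive-day definition quoted above is straightforward to turn into a detector. Below is a minimal sketch; note that the threshold here is a fixed constant for simplicity, whereas studies of this kind typically use a seasonally varying percentile threshold rather than a single number:

```python
def marine_heat_wave_days(daily_temps, threshold, min_run=5):
    """Count the days that fall inside marine heat waves, defined
    here as runs of at least `min_run` consecutive days with the
    temperature above `threshold`."""
    total = run = 0
    for temp in daily_temps:
        if temp > threshold:
            run += 1
        else:
            if run >= min_run:
                total += run  # the run that just ended was long enough
            run = 0
    if run >= min_run:  # handle a run reaching the end of the record
        total += run
    return total

# A 6-day warm spell counts; the later 3-day spell does not.
temps = [20, 22, 22, 22, 22, 22, 22, 19, 22, 22, 22, 19]
print(marine_heat_wave_days(temps, threshold=21))  # 6
```

Trend statistics like the 54 per cent figure then come from applying such a count year by year across the temperature record.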
A 10 per cent rise in snowfall in Antarctica is adding more ice to the continent each year, but the ice sheets are still shrinking because ice is being lost faster too. Snowfall in Antarctica has increased by 10 per cent since 1800, an analysis of ice cores from the continent has revealed. An increase in snowfall has long been predicted as a result of global warming. “A warming atmosphere is wetter, producing more precipitation,” says team leader Liz Thomas of the British Antarctic Survey, who presented the findings today at a meeting of the European Geosciences Union in Vienna, Austria. In fact, it used to be thought that increased snowfall in Antarctica would more than counter any ice loss due to warming. Early IPCC reports forecast that the ice sheets of Antarctica would grow over the 21st century. But gravity-measuring satellites have shown that the continent’s ice sheets have been losing mass since at least 2002. These vast ice sheets are made of the snow that has fallen in Antarctica over the past million years or so. As the snow builds up, it is gradually compressed and turned into ice. To find out how snowfall has changed recently, Thomas and her colleagues analysed 79 ice cores from across the Antarctic, most of which went back at least 200 years. For the whole of Antarctica, they found that 10 per cent more snow falls now than 200 years ago, an average difference of 272 gigatonnes of water per decade, says Thomas. Her team has already published some of the results, with more to follow soon.

Media watchdog Ofcom has rebuked the BBC over a radio interview with climate change sceptic Lord Lawson last August. It found that Radio 4's Today programme had breached broadcasting rules by "not sufficiently challenging" the former chancellor of the exchequer. The BBC has admitted the item broke its guidelines and said Lord Lawson should have been challenged "more robustly".
It is the first time Ofcom has found the BBC in breach since taking over regulation of the corporation in 2017. "Statements made about the science of climate change were not challenged sufficiently during this interview, which meant the programme was not duly accurate," said an Ofcom spokeswoman on Monday. In the interview aired on 10 August last year, the ex-chancellor claimed "official figures" showed average world temperatures had "slightly declined". He also claimed the UN's Intergovernmental Panel on Climate Change (IPCC) had confirmed there had not been an increase in extreme weather events for the last 10 years. This view, shown to be false by the Met Office, was not challenged on air by presenter Justin Webb. In its ruling, the broadcasting regulator ruled there was "clear editorial justification for the topic of climate change to be covered". "However, in doing so the BBC needed to ensure that the topic was reported with due accuracy and due impartiality." "The programme did not clearly signal to listeners that [Lord Lawson's] view on the science of climate change ran counter to the weight of scientific opinion in this field," Ofcom continued. Talks on the global shipping industry cutting greenhouse gases have opened with a passionate plea for action. A minister from the Marshall Islands warned that the future of his low-lying Pacific country was at stake. The shipping industry generates more than 2% of global CO2 emissions but that's projected to increase rapidly. More than 100 countries are meeting at the International Maritime Organisation in London to try to agree on a new policy. Battle lines are drawn between countries determined to see deep cuts in shipping's greenhouse gases and those that fear that rapid limits could damage development. Shipping was exempted from the Paris Agreement because it involves an international activity and the agreement was based on a system of national targets. 
But the industry currently produces a higher level of carbon emissions than Germany and, if it were ranked as a country, it would be the sixth largest emitter on the planet. Speaking to the gathering of more than 1,000 diplomats and shipping industry executives, David Paul, environment minister of the Marshall Islands, said that shipping was a major source of income for his country, which has the second largest number of ships registered. But he said the economic gains of protecting one sector would be "far outweighed" by the costs of failing to achieve the limits in temperature rise set out in the Paris Agreement. "There will be nothing more devastating to global trade than the cost of having to try to adapt to a world that is on average two, three or four degrees warmer," Mr Paul told delegates. And he said that the argument that climate action could undermine economic growth was "completely and utterly false".

Understanding Arctic ponds can help us predict how fast the ice is melting. Their formation is governed by the simple maths of drawing overlapping circles. A simple model of the patterns formed by ponds could help us make better predictions about how Arctic ice is melting. Arctic sea ice has been melting faster than expected. One factor that might contribute to this is the way ponds trap heat. When ice melts and forms ponds on the surface of sea ice, it becomes less reflective, trapping more heat and making the ice melt faster. That leads to a positive feedback loop: melting creates ponds, which results in faster melting. Previous studies have shown that the fraction of the ice covered by ponds in the spring can predict how much sea ice will be left at the end of the summer each year. The geometry of the ponds influences how the ice around them melts: for example, small, elongated ponds grow sideways, melting the ice at their edges faster than large, symmetrical ponds do.
Predrag Popovic at the University of Chicago and colleagues modelled the patterns created by ponds by randomly drawing overlapping circles of varying sizes on a plane. They analysed hundreds of photographs of sea ice taken during helicopter flights to compare their model with real data. 4-5-18 More severe heat waves will broil the U.S. A new study traces the contamination of fertilizer back to household and supermarket waste. Composting waste is heralded as being good for the environment. But it turns out that compost collected from homes and grocery stores is a previously unknown source of microplastic pollution, a new study April 4 in Science Advances reports. This plastic gets spread over fields, where it may be eaten by worms and enter the food web, make its way into waterways or perhaps break down further and become airborne, says Christian Laforsch, an ecologist at the University of Bayreuth in Germany. Once the plastic is spread across fields, “we don’t know its fate,” he says. That fate and the effects of plastic pollution on land and in freshwater has received little research attention compared with marine plastic pollution, says ecologist Chelsea Rochman of the University of Toronto. Ocean microplastics have gained notoriety thanks in part to coverage of the floating hulk of debris called the great Pacific garbage patch (SN Online: 3/22/18). But current evidence suggests that plastic pollution is as prevalent in land and freshwater ecosystems as it is in the oceans, where it’s found “from the equator to the poles,” says Rochman, author of a separate commentary on the state of plastic pollution research published in the April 6 Science. Plastic “is seen in the high Arctic, where we suspect it comes down in rain. We know it’s in drinking water, in our seafood and spread on our agricultural fields,” she says. Spending on renewables in developed countries has halved since 2011, with investment levels in Europe falling back below the 2006 level. 
The world added more solar capacity in 2017 than all new coal, gas and nuclear electricity-generating plants combined. That’s the headline conclusion of a report on how much banks, private investors and utility companies invested in renewables last year. While that sounds promising, on closer examination there are some worrying numbers in the same report. They reveal that in most of the world, investment over the past few years has either changed little or fallen, often because of cutbacks in subsidies – showing that despite getting ever cheaper, wind and solar remain heavily dependent on government support. In fact, investment in the developed countries whose emissions have caused most of the global warming so far has halved since 2011, to $103 billion. Most shocking is what is happening in Europe, which is meant to be leading the world in tackling climate change. There investment peaked at $126bn in 2011 and has now fallen to $41bn. The global figures would look quite grim were it not for the astounding efforts of China, where investment in renewables has soared over the last decade to hit a record $127bn last year. This means that in China alone, investors are now pouring more money into solar and wind power than in all the developed countries combined. It’s important to point out that because the cost of building wind farms and solar plants has fallen sharply, every buck spent today creates far more electricity-generating bang than a decade ago. But if investment in developed countries had remained at 2011 levels, the world would be getting a lot more of its electricity from renewable sources than it is now. And that matters. Despite the $3 trillion spent globally since 2004, just 12 per cent of the world’s electricity came from renewable sources in 2017, compared with 5 per cent in 2005 (these figures exclude large hydroelectric schemes and nuclear plants). 
This is projected to rise to 34 per cent by 2040, says the lead author of the report, Angus McCrone of Bloomberg New Energy Finance. A study of litter in UK seas shows the number of plastic bags has fallen, amid a rise in other types of plastic rubbish. The authors say this could be due to several things - the introduction of charges for plastic bags across Europe, manufacturing changes and changes in ocean dynamics. The research found a rise in the proportion of fishing debris. Some of the plastic debris is likely to be coming from outside the UK. The reduced proportion of plastic bags in marine litter was found from 2010 onwards. There was a drop of around 30% from the pre-2010 period compared with afterwards. If charging is a potential contributor, the downward trend could suggest that policies can affect the amount and distribution of certain marine litter items on short timescales. But in their scientific paper, they add that this point is controversial. A change in the composition of plastic bags, which may speed up the rate at which they decompose, could also be another factor. Co-author Thomas Maes, who is a marine litter scientist at the government's Centre for Environment, Fisheries and Aquaculture Science (Cefas), said: "It is encouraging to see that efforts by all of society, whether the public, industry, NGOs or government to reduce plastic bags are having an effect. "We observed sharp declines in the percentage of plastic bags as captured by fishing nets trawling the seafloor around the UK compared to 2010 and this research suggests that by working together we can reduce, reuse and recycle to tackle the marine litter problem." A UK levy of 5p per bag was introduced in 2015. A congestion charge in Stockholm not only cut levels of air pollution, it halved the number of children admitted to hospital with asthma attacks. Children’s health can benefit from congestion charging schemes that limit city-centre traffic and the airborne pollution it generates. 
But a comparison of the congestion charges in London and Stockholm suggests the schemes only achieve this if they drive down the amount of nitrogen dioxide belched into city air by vehicles. Emilia Simeonova of Johns Hopkins University in Baltimore, Maryland, and her colleagues tracked air pollution levels in Stockholm, Sweden, from 2004 to 2010. In 2007 the city introduced a congestion charge, and levels of nitrogen dioxide fell by 5-7.5 per cent. Nitrogen dioxide is the most harmful pollutant from vehicle exhausts: it aggravates asthma and other respiratory ailments. The reduction in nitrogen dioxide in Stockholm appeared to benefit children. Before the congestion charge, 18.7 children in 10,000 were admitted to hospital with asthma attacks. Afterwards the number fell by more than half, to 8.7 per 10,000 (National Bureau of Economic Research, doi.org/cm2d).

Re-creation of the river’s 500-year flood history shows the worst floods are bigger than ever. The world’s longest system of levees and floodways, meant to rein in the mighty Mississippi River, may actually make flooding worse. Using tree rings and lake sediments, researchers re-created a history of flooding along the lower Mississippi River extending back to the 1500s. This paleoflood record suggests that the past century of river engineering — intended to minimize flood damage to people living along the river’s banks — has instead increased the magnitude of the largest floods by 20 percent, the researchers report April 5 in Nature. Climate patterns that bring extra rainfall to the region don’t account for the dramatic increase in flood size, the team found. “The obvious culprit is that we have really modified the river itself,” says Samuel Munoz, a geoscientist at Northeastern University in Boston. Settlers built the first levees on the Mississippi in the early 1800s. After a massive flood displaced hundreds of thousands of people in 1927, the U.S. government built the current system of spillways and levees.
The engineering projects profoundly altered the river’s shape and sediment content. But how these changes affected the size of the river’s largest floods has been unclear, in part because water gauges have tracked the river’s flow for just 150 years. 4-3-18 Are we ready for the deadly heat waves of the future? When days and nights get too hot, city dwellers are the first to run into trouble. Some victims were found at home. An 84-year-old woman who’d spent over half her life in the same Sacramento, Calif., apartment died near her front door, gripping her keys. A World War II veteran succumbed in his bedroom. Many died outside, including a hiker who perished on the Pacific Crest Trail, his water bottles empty. The killer? Heat. Hundreds of others lost their lives when a stifling air mass settled on California in July 2006. And this repeat offender’s rap sheet stretches on. In Chicago, a multiday scorcher in July 1995 killed nearly 700. Elderly, black residents and people in homes without air conditioning were hardest hit. Europe’s 2003 heat wave left more than 70,000 dead, almost 20,000 of them in France. Many elderly Parisians baked to death in upper-floor apartments while younger residents who might have checked in on their neighbors were on August vacation. In 2010, Russia lost at least 10,000 residents to heat. India, in 2015, reported more than 2,500 heat-related deaths. Year in and year out, heat claims lives. Since 1986, the first year the National Weather Service reported data on heat-related deaths, more people in the United States have died from heat (3,979) than from any other weather-related disaster — more than floods (2,599), tornadoes (2,116) or hurricanes (1,391). 
Heat’s victim counts would be even higher, but unless the deceased are found with a fatal body temperature or in a hot room, the fact that heat might have been the cause is often left off the death certificate, says Jonathan Patz, director of the Global Health Institute at the University of Wisconsin–Madison.

Deep seafloor troughs allow warm water to eat away at the ice from below, speeding shrinkage. Greenland is melting rapidly, but some glaciers are disappearing faster than others. A new map of the surrounding seafloor helps explain why: many of the fastest-melting glaciers sit atop deep fjords that allow Atlantic Ocean water to melt them from below. Researchers led by glaciologist Romain Millan of the University of California, Irvine analyzed new oceanographic and topographic data for 20 major glaciers within 10 fjords in southeast Greenland. The mapping revealed that some fjords are several hundred meters deeper than simulations of the bathymetry had suggested, the researchers report online March 25 in Geophysical Research Letters. These troughs allow warmer and saltier waters from deeper in the ocean to reach the glaciers and erode them. Other glaciers are protected by shallow sills, or raised seafloor ledges, which act as barriers to the deep, warm water, the new seafloor maps show. The researchers compared their findings with observations of glacier melt from 1930 to 2017 and found that the fastest-melting glaciers tended to be those most exposed to melting from below.

Governments are dithering over whether to limit climate change to 1.5°C or 2°C, but it seems the stricter target would avoid food shortages and major economic losses. Sometimes it’s good to over-reach – particularly when it comes to stopping climate change. New evidence comparing the impacts of 1.5°C and 2°C rises in temperature reveals the unprecedented food shortages, economic inequality and species loss that will occur if we don’t aim for the more ambitious target.
In 2015, global leaders signed up to the Paris Agreement: a commitment to keep global warming under 2°C and possibly even limit it to 1.5°C. Comparisons between the two targets show they have dramatically different impacts. For example, several regions are predicted to reach unprecedented levels of food insecurity, owing to increased flooding and drought as a consequence of global warming. For three-quarters of the countries assessed, this increase is larger at 2°C than at 1.5°C. The most vulnerable regions are sub-Saharan Africa and South Asia. Meanwhile, global average GDP per capita is projected to be 5 per cent lower at the end of the century under 2°C of warming relative to 1.5°C, and 13 per cent lower than under no additional warming. This economic loss will be felt most strongly by low-income countries, creating greater global inequality. However, limiting the rise to 1.5°C rather than 2°C would see an additional 5.5 per cent of the globe able to act as a “climate refuge” for plants and animals.

Scientists now have their best view yet of where Antarctica is giving up ground to the ocean as some of its biggest glaciers are eaten away from below by warm water. Researchers using Europe's Cryosat radar spacecraft have traced the movement of grounding lines around the continent. These are the places where the fronts of glaciers that flow from the land into the ocean start to lift and float. The new study reveals that an area of seafloor the size of Greater London that was previously in contact with ice is now free of it. The report, which covers the period from 2010 to 2016, is published in the journal Nature Geoscience. "What we're able to do now with Cryosat is put the behaviour of retreating glaciers in a much wider context," said Dr Hannes Konrad from the University of Leeds, UK. "Our method for monitoring grounding lines requires a lot of data but it means you could now basically build a permanent service to monitor the state of the edges of the continent," he told BBC News.
Although the end product is quite simple, the process of getting to it is quite a complex one. Viewed from above, the position of grounding lines is not always obvious. The glaciers themselves are hundreds of metres thick, and where they begin to float as they come off the continent can be hard to discern in simple satellite images. But there are radar techniques that can find their location by spotting the up and down tidal movement of a glacier's floating ice. This, however, is just a snapshot in time. What Dr Konrad and colleagues have done is use these known positions and then combine the data with knowledge about the shape of the underlying rock bed and changes in the height of the glaciers' surface to track the evolving status of the grounding lines through time.

Fiji's prime minister has said the Pacific island nation is in "a fight for survival" as climate change brings "almost constant" deadly cyclones. Frank Bainimarama said Fiji had entered a "frightening new era" of extreme weather that needed to be confronted. His comments came after Cyclone Josie caused deaths and flooding on Fiji's main island, Viti Levu, at the weekend. In 2016, a cyclone hit Fiji leaving 44 people dead and wiping out one-third of the nation's economic production. Four people have died in severe flooding caused by Cyclone Josie, according to Reuters news agency. "We are now at an almost constant level of threat from these extreme weather events," Mr Bainimarama said on Tuesday, adding that powerful cyclones in the region were becoming "more severe" as a result of climate change. "We need to get the message out loud and clear to the entire world about the absolute need to confront this crisis head on," he said. "As a nation we are starting to build our resilience in response to the frightening new era that is upon us," he added. Last November, Mr Bainimarama took a leading role at the UN's Climate Conference in the German city of Bonn.
The planet's climate has constantly been changing over time. However, the current period of warming is occurring more rapidly than many past events. The changes could drive freshwater shortages, bring sweeping changes in food production conditions, and increase the number of deaths from floods, storms, heat waves and droughts. Climate change is expected to increase the frequency of extreme weather events - though linking any single event to global warming is complicated.

When engineer Lukasz Cejrowski finally saw the world's largest wind turbine blades installed on a prototype tower in 2016, he stood in front of it and took a selfie. Obviously. "It was amazing," he says, recalling the moment with a laugh. "The feeling of happiness - 'Yes, it works, it's mounted.'" Those blades, made by Danish firm LM Wind Power, were a record-breaking 88.4m (290ft) long - bigger than the wingspan of an Airbus A380, or nearly the length of two Olympic-sized swimming pools. The swept area of such a mammoth rotor blade would cover Rome's Colosseum. But things move quickly in the wind turbine industry. In just a few years, those blades could be surpassed by the company's next project - 107m-long blades. LM Wind Power is owned by global engineering firm General Electric (GE), which announced in March that it hopes to develop a giant 12MW (megawatt) wind turbine by the year 2020. A single turbine this size, standing 260m tall, could produce enough electricity to power 16,000 households. The world's current largest wind turbine is a third less powerful than that, generating 8MW. Various companies, including Siemens, are working on turbines around the 10MW mark. When it comes to wind turbines, it seems, size matters. This is because bigger turbines capture more wind energy and do so at greater altitudes, where wind production is more consistent. But designing and manufacturing blades of this size is a significant feat of engineering.
Mr Cejrowski says that the firm could in theory use metal, but the blades would be extremely expensive and heavy. Instead, they use a mix of carbon and glass fibre.

Environmentalist Mark Lynas, who once destroyed GM crops and then made headlines by ending his opposition, is stepping up his call for reason to triumph. Pro-science types, when they lambast those who campaign against genetically modified crops, often point out that no one has ever been harmed by the food produced from them. After 3 trillion meals, they insist, nobody has credibly reported even so much as a headache. August bodies – from the US National Academy of Sciences to the UK’s Royal Society – all agree that food from genetically modified organisms (GMOs) is as safe as any other. Perhaps I am the first person, therefore, harmed by dealings with a GMO. During an hour spent recently examining the performance of genetically engineered maize in a “confined field trial” near Kampala in Uganda, I received quite a severe sunburn. The maize itself looked impressive, however. Carrying an insect-resistance gene called Bt, it was clearly able to fend off pests better than the neighbouring non-GM equivalents, which were riddled with holes, much shorter and carrying smaller cobs. While there, I spoke to a local farmer called Lule Monica. Also a council leader, Monica told me she was “praying” for the day when the genetically modified maize being trialled in the research station would be available to farmers like her. She is concerned about a pest called fall armyworm that has invaded maize crops in Uganda and elsewhere in East Africa, and farmers are struggling. The Bt maize would help. Produced under the banner of the Gates Foundation-funded philanthropic Water Efficient Maize for Africa partnership, it also carries a drought-tolerance trait to help resist the worsening impacts of climate change.
Fish and seafood are normally fairly environmentally friendly, but it takes so much fuel to catch some species that their carbon footprint is as big as that of red meat. WILD-caught seafood is usually an environmentally friendly thing to eat. But a few species have greenhouse-gas footprints as large as that of beef. Because those high-footprint species are growing in popularity, greenhouse gas emissions from the world’s fisheries have risen sharply over the past two decades. The extra effort needed to catch depleted species is also contributing to the rise. Robert Parker at the University of British Columbia in Vancouver, Canada, and his colleagues pulled together country-by-country data for fisheries catches. They combined this with best estimates of fuel use for each class of fishery. Because fuel accounts for the vast majority of greenhouse gas emissions from fishing, they could calculate the total carbon footprint for each fishery. Globally, they found that carbon emissions from fisheries rose by 28 per cent between 1991 and 2011, even though total catch has barely changed. That contrasts with other foods, where improved efficiency has led to lower emissions per kilogram of product. One reason is that people are eating more shrimp and lobster, both of which emit a lot per kilogram, comparable to beef. Most other fish are good choices for a climate-friendly diet. “The typical fish product is going to have a similar footprint to chicken, which is the most efficient land-based animal source,” says Parker. Some small fish such as anchovies do even better. The team is now developing a website where people can look up the greenhouse gas footprints of different seafood.

French farmer Bernard Poujol believes ducks are the future for rice farms, but he hasn't quite perfected his technique.

Gallup's annual Environment survey yields two broad conclusions about Americans' views on the U.S. energy situation.
First, Americans' concern about energy, based on multiple measures, is at or near its lowest level in two decades or more. Second, Americans continue to voice preferences for environmental protection, energy conservation and developing alternative energy over producing more traditional energy supplies. Twenty-five percent of Americans say they worry "a great deal" about the availability and affordability of energy -- a new low in Gallup's 18-year trend, though not substantially lower than the readings in 2003 and 2015 through 2017.
She is beautiful, lithe and swift: as deadly as the blade flashing in her deft grip. The blood of kings runs strong in her veins---but her weakling brother wears the crown. She is Bronwyn. And her name strikes fear in the hearts of the depraved courtiers feasting like jackals on the corpse of her father’s kingdom. Her brother may rule the land, but a ruthless maniac is the puppet master behind the throne. And he has put a price on the head of the fugitive princess, who alone knows the secret to his power. To save her kingdom, Bronwyn must enlist a rebel force of gypsies and giants, peasants and pirates, mountebanks and changeling spies... Volume One of a four-book series.

If it had not been for the strong arm of Thud Mollockle, the Princess Bronwyn would today be languishing in some unknown and mossy dungeon, had she been allowed to remain alive at all. Thud Mollockle was a sarcophagus-maker for a stonecutting firm in the Transmoltus district of Blavek. He worked without assistance in a large, low-ceilinged ground-floor room. Directly above him were the studios of the more skilled stone carvers, who worked on Church and private commissions. They provided more than a third of the city's architectural decorations: caryatids, capitals, friezes, pediments, cherubim and seraphim, urns, bas-reliefs and portrait busts, among many other standard and commissioned items. Dust---fine, white and talc-like---filtered through the wide spaces between the boards that formed the stone carvers' floor and Thud's ceiling. When the afternoon or morning sun beamed in through either the southeast or southwest windows, this ever-present lithic miasma became illuminated in a milky glow that made it almost impossible to see one end of the shop from the other. Thud was probably doomed to a lingering death from silicosis since he had begun working at the shop at the age of six and was now thirty-two.
Still, he just as certainly would have thought it unnatural to breathe an atmosphere composed of anything other than ten percent oxygen, fifty percent nitrogen and forty percent marble. All day long, Thud could hear above his head the ceaseless, fussy tink, tink, tink of sharp steel chisels. Thud's workroom was occupied by Thud and perhaps a dozen rectangular blocks of stone the color of oxidized potatoes. These averaged about four feet thick, about the same in width, and five or six feet in length. It was Thud's job to hollow them out. When finished, each solid mass of stone had been transformed into something resembling a deep, uncomfortable-looking bathtub. These were sent upstairs to the stone carving departments where it became the job of junior stone carvers to decorate the four sides of the sarcophagi with appropriately funereal decorations. Meanwhile, the great slabs of the lids were being prepared elsewhere---a small subdepartment of the firm being devoted to just this one product for which there is a known and steady demand. Eventually some merchant or politician would have his mortal remains sealed within one of these stone cocoons, where it would safely molder away, decomposing decently out of sight and memory. What became of his vast stone basins is a question that very seldom troubled Thud, whose skull resembled his raw material to a striking degree; in density, at least, if not in form. The stoneworking firm of Groontocker and Peen was never, during working hours, a particularly quiet place. An ancient frame building that filled an entire irregularly shaped city block, it was divided into five floors or, rather, for the most part, vast open lofts. More than one hundred and twenty artisans worked in them, and not one of them worked at anything that did not make noise. 
Mallets struck chisels, chisels struck stone, stone struck floor, rasps abraded marble and granite; winches and pulleys screeched under their massive loads; drills bored holes into resisting stone with a sound that made teeth whimper in empathy. To all of this the old building vibrated sympathetically like the sounding box of a guitar; all day long it hummed and throbbed and groaned. Nevertheless, as the Transmoltus is a district of industry, Groontocker and Peen was a comparative island of serenity; try as it might, its contribution to the general din was almost negligible. The building of the stone working firm was crowded by its neighbors, vast and ancient piles of frame or brick or stone all stained alike by the oleaginous soot that made the atmosphere of the Transmoltus unique. Only lightless cobbled paths snaked between the crowded buildings. These were filled with a chaos of carts and vans; trolleys and trucks; people with baskets, boxes and bundles. The sound had nowhere to go: the squealing of axles, the rumble of iron tires on stone, and the shouting of peddlers, merchants and angry drivers who should have known better than to be in a hurry in the Transmoltus in the first place. From the surrounding buildings came unearthly noises; few of them identifiable except by the very knowledgeable or excessively imaginative, but all of them unpleasant. High, sustained shrieks that made one's head feel as though it were being threaded on an endless steel wire; bass moans that made the lower intestine shudder weirdly; and resonating bongs that sounded as though boilers were being dropped a full story onto hard ground---which, in this instance, is just what was happening. Directly opposite the one large open window in Thud's workshop that faced the main thoroughfare were the open windows of a vitally active, mechanized factory: the belts that ran the great heckling machines within screamed and buzzed and occasionally cracked like whips.
Thud had no idea what heckling machines actually made, other than noise. In fact, Thud did not even know that there were such things as heckling machines. However, all of this unholy, brain numbing, bone rattling din was only a background murmur to anyone who had grown up amid it. Therefore Thud was no more consciously aware of the stormy sea of vibration that washed over him, like a tsunami over an innocent tropical islet, than any one of us might be of the ticking of a clock, the song of the cricket or bird, or the beating of our own hearts. All of which went to show why it was not surprising that Thud heard the princess. Or, rather, he heard the armed men who were close behind her. Shouting and a single pistol shot drew Thud away from his work and to the open window. Just as he thrust his head into the full sonic fury of the heckling machines, he saw a girl turn the corner and pause just below him. She looked like one of the rats that Thud occasionally cornered in his shop: completely out of wind, head twitching side to side, looking for an escape that doesn't exist. The girl was cornered just as effectively as one of the rats, too. On her right was the vast, unbroken wall of Groontocker and Peen; ahead she was faced by the equally unbroken wall of the factory that housed the mysterious heckling machines. Unbroken at least so far as the girl's immediate needs required, since the only windows were far above her head. To her left was the entrance to the alleyway from which she had just appeared. The streets in both directions were plugged by a nearly impassable log jam of human bodies and vehicles. Once caught in that writhing mass she would be ground to dust, like a stone in a lapidary's tumbler. Thud would have been hard put to explain his next action, as he would have been hard put to explain anything he did. Thud was a creature of action, if absolutely necessary, not one of introspection, which was never necessary. 
He certainly wasn't moved by the girl's appearance, since all he could see of her was the top of her head. Perhaps the ovoid mat of hair reminded him of a little animal. He always felt sorry whenever he trapped one of the rats that infested his shop: he hated the way it looked at him just before he hit it with his mallet. He was always tempted to let the rat go, though he didn't dare or it would bite his ankles and steal his already meager lunch. He felt much the same way about the creature he saw beneath his window---and here was a chance to make amends to hundreds of mashed rats. "Hey! You!" he shouted down at her. She looked up with a twitch like a startled cat and saw dangling before her a knotted, brown, ropy, hairy, scarred thing: something like a tree root and something like a big sausage: Thud's arm. The girl, without a moment's hesitation, did as she was asked and was whisked into the window as though she had been a handkerchief Thud had been waving at some departing friend, if he had had any friends, that is. Thud now saw what lay below the thatch of hair he had been staring down upon only a moment before. It was indeed atop a girl, as he had suspected. Immediately below the hair, which was straight, shoulder length and colored a dark auburn, like oiled mahogany, was a face lean from fright and exhaustion. Large, wide-spaced, bottle green eyes, almost imperceptibly slanted, were dilated from fright and the sudden darkness. They were framed by rather thick, peaked eyebrows, much darker than her hair, each as elegant and eloquent as a calligraphic brush stroke. Her face, initially red from exertion, was now taking on an equally unnatural parchment-like tint. It was angular, with very prominent cheekbones slanting toward the corners of her wide mouth. The face looked to Thud like one of the cold alabaster busts from the third floor. Her nose was long, rather thin and more convex than straight. A raptor's nose.
She looked older than she was, though there was no way Thud could have known that then. In fact, she was young: seventeen or perhaps eighteen. Although she was far above average in height, Thud would never have described her as tall; from his mountainous viewpoint, everyone was short. Her legs were coltish, comprising more than half of her not inconsiderable altitude. She was slim-hipped, small-breasted and rather snakily lithe-looking. She clutched a battered leather satchel to her chest, held in place there by a stout strap that crossed her chest diagonally. She was wearing a long-sleeved, ankle-length dress of a fine cloth that, though torn and bedraggled, still looked more elegant than anything Thud had ever seen. On the other hand, what had suddenly appeared before the girl she did not recognize immediately as something human, and she was probably more than half-right in that. Something more like a bull, perhaps; it was very bull-like, though it had something bear-like about it, too. As well as ox-like, or even like a walrus a bit, but a lot more like a gorilla. With a sudden inspiration, she thought of an enormous loaf of bread with clothes on. She imagined such a loaf over seven feet high, with four additional loaves for arms and legs; she thought that, in color, texture, shape and general overall impression, the simile was fairly accurate. A muffin was balanced somewhere near the top: a currant muffin, since a pair of black dots were staring at her. The baked goods man image was so perfect she almost couldn't believe it when the muffin spoke to her. "Somebody chasing you?" it asked, and the girl could only nod. "Somebody real bad? You scared?" She nodded again and jumped away from the window with a gasp, as the voices of her pursuers reached her. They were rough, supercilious voices and they were commanding the street people to tell them where the girl had gone.
But in the few seconds that had passed, the people in the alleyway had been completely replaced: they weren't the same ones who had been there when a girl had been lifted through the window by an arm like a tree root (or loaf of braided bread). "They're coming!" she cried, and there followed the sounds of crashing doors, stamping feet and muffled shouts of command and protest, vindicating the accuracy of her observation. "All right, then," Thud said, pointing to a spot on the floor. "Curl up there, like a ball." "What?" said the girl curtly, not understanding and, even in the throes of a precarious situation, finding herself bristling at being given such a peremptory order. She was not used to being commanded. "Hurry! Sit down there and curl up like a ball." Puzzled, she sat on the snowy floor and hugged her knees. She did as she was told in spite of her lack of practice and inclination for following directions, though she could not have understood why. Since the big man was not shouting "Here she is!" or trying to seize her, which latter he could have done with one enormous hand, he couldn't be up to anything much worse, she decided. She watched with amazement as the huge creature picked up one of the massive sarcophagi, seemingly without effort, and lumbered over to where she sat. "Be real quiet, now!" he ordered, and before she could so much as offer a surprised syllable, upturned the stone coffin over her. The girl was plunged into profound darkness before she even realized what the big man's intention was. She felt trapped, as one of Thud's doomed rats might feel if a bowl were to be dropped over it. Panic reared up in her, its waxen, sweaty face urging her to become hysterical. Only a second ago, she had been out in the bright, noisy street, running for her life. Now she was caught in a trap as dark and silent as a nightmare. The transition was bewildering and disorienting.
Had she been offered the two as options, she would have considered them fairly well matched choices. What if the monster leaves me here? She could never lift the block of stone on her own and there must only be a few minutes' worth of air within its cavity. What if he forgets about me? He hadn't looked very bright at all. What if they arrest him, and take him away before he can tell them where I am? What if they shoot him, and he dies with my hiding place bubbling on his bloody lips? How many months or years would it be before someone decides to move the big block of marble, discovering to their horror and mystification my decomposed or even mummified body? She visualized herself transformed into a kind of monkey-like caricature sculpted in jerky. Perhaps, she thought, it would be a fate better than the one that awaits me otherwise. Her head was pressed against the bottom of the sarcophagus, now her roof, which was barely wide enough for her broad shoulders. Her prison was roomy enough longitudinally, however, so that she could stretch out her long legs, and she leaned against the back wall. The stone was cool and moist and she gratefully pressed her face against it. It smelled cool and earthy, like fresh mushrooms. She rolled her head and when her ear came into contact with the stone, she could hear voices. Simultaneously, she felt the floorboards vibrating with heavy footsteps. The voices were muffled, but they were those of the Guards, for sure. She recognized the imperious, condescending tones. They were demanding that the big man produce the girl. The big man asked, What girl? There's a girl hiding somewhere in this building, the Guard replied. She heard the big man starting to chisel on a stone. Look, have you seen anyone? The girl! The girl! Have you seen her? What girl? shouted the Guard. The girl hiding in this building! Where? rejoined the big man, anxious to help. The Guard called him an idiot and ordered his men to search the room.
There was a great deal of noise, which ceased presently. The floor vibrated again from the weight of the armed men, followed by a long silence. Long enough that the girl began to have renewed fears about suffocation and entombment---and imagination though it might be, it was becoming extremely difficult to breathe. The sarcophagus gave a groan and a line of light suddenly appeared where stone had met floor. It was dazzling to the girl's eyes and she squeezed them shut against the pain of rapidly constricting irises. When she opened them a second later, the great block of stone was gone from around her and in the arms of Thud, who was setting it down a few feet away, with a thud of its own. It was only at that moment that the girl fully realized what the big man had done: overturned, the hollowed-out block looked exactly like the solid, unworked cubes that ponderously littered the room. It would have strained anyone's imagination to have suspected that a girl was inside one of them and the Guards notoriously lack that useful mental faculty. It did not occur to her (and least of all to Thud) that the ploy had been a pretty astute one for someone of Thud's obvious mental limitations. The big man turned to her with his forefinger upraised to his lips in the sign for silence. He crouched down near her, folding up on invisible joints like a collapsing blimp (or a failed soufflé, to maintain the earlier culinary simile). Seen from a distance of only three feet or so, Thud's face was a marvel to the girl. She had never seen anything even vaguely similar to it; what amazed her was the gradual realization that she liked it. The head was as smooth, round and featureless as a mushroom; the mouth a slit so wide that the entire top of the head threatened to hinge over backwards whenever he grinned---an action that served to expose a pink cavern full of gnarled yellow stalactites and stalagmites, behind which lurked a restless, scarlet tongue, like a fretful blindworm.
His eyes, as socketless as a mole's, were bright black beads nearly a hand's span apart. Roughly between them was a kind of lump that might have been a nose or might have been a wart. Thud by all rights ought to have been monumentally ugly, though he wasn't. It is difficult to explain why, and, in all the time to come that she was to know him, the girl certainly never even considered trying; but perhaps it was because the face radiated an uncomplicated kindness the way a burning coal fills a hearth with warmth. That was one possibility at any rate. "Please," whispered Thud, "be very quiet. Those men are looking for you, aren't they?" "What'd you do that made them so mad?" "I didn't do anything!" she lied. "Oh, sure, they might be back, all right," agreed Thud. "But I can't leave, either." "They'd spot you in a minute," agreed Thud. "I have to get out of the city! I must!" There was a silence between them, since there wasn't much that could be said after that; the conversation was going nowhere. The girl's sadness lacerated Thud's heart; he had no idea what he could do to relieve it. His great hands wrestled with one another, like a pair of small dogs roughhousing. The rough skin sounded like millstones grinding. The girl looked up at him with eyes that were like the peal of bronze bells. An idea squirmed its way to the forefront of Thud's consciousness, where his mind's eye blinked at it in unexpected and unfamiliar realization of his genius. "I can get you out of the city." "I can't tell you. I'll have to show you." "It's a way I used when I was a kid"---and the girl found it impossible to create a mental image of the giant as a child; the picture leaning more toward something like a pupa than anything human---"but you've got to get out of here first." Thud rose to his feet and went over to the big window with the kind of ponderous grace a cow affects, and occasionally achieves.
He leaned out over the sill and gave a good, long, hard look in both directions and then returned to the girl, still with the unhurried deliberateness of the truly bovine. "There're Guards in the street. They're likely all over. If I can get you out of here, I can get you home. Then it'll be easy." From his workbench, Thud selected a large chisel, nearly as long as the girl's arm, with a blade as sharp as a razor. It winked at her conspiratorially in the window light. Thud jammed its cutting edge into a chink between two of the wide floorboards and bore down upon the opposite end. The board lifted with a protesting screech. Moving down its length, he repeated the action two or three times until an entire ten-foot long slab of thick lumber had been pried from its moorings. Rusty spikes hung down from the moldy underside like miniature stalactites. A hundred annoyed spiders dropped from it and scurried for cover. Thud got down on his hands and knees and dropped his head into the rectangular hole. He looked up and said, "Come on, this way!" as spider webs floated from his face and a small insect, panicked, disappeared over the curve of his head like an arthropodal Magellan. The girl looked into the hole with extreme distaste. She imagined a thousand unwinking little eyes gazing back at her. "In there?" she asked, unnecessarily. "Please," the giant begged. She reminded herself that there were, probably, worse alternatives. She had seen some of them. In fact, it was just because she knew there were worse things than a damp hole, slimy with grey fungus and alive with invertebrate things that she climbed down into the darkness after only that single moment's hesitation. She left shreds of her rumpled dress festooning the splintery edges of the narrow slot, which was perhaps just an inch less wide than it should have been. Whether or not she thought of it, the girl should have been grateful for her slender, boyish silhouette. 
The earth was only about three feet below the floor. It was covered with a kind of grey-green gruel of mud, decomposing wood and the dust of limestone and marble. Her feet sank into it until her ankles were buried. It sucked at her feet when she tried to lift them. She looked up at the gargoyle face that hovered over her head, round and pale and grinning like a moon. "Turn around and crawl," it said. "You'll see a little bit of light. Head for that. When you get to it, stay there. Wait for me." "Hurry!" he urged, and she had to duck as he replaced the plank over her head. A dozen sharp blows rained dust and crawling things down upon her as Thud reset the nails. It was absolutely dark, with the exception of the thin lines of light between the floorboards. They receded from her in either direction like an elementary exercise in perspective. They striped the contours of her body so that she looked like a topographic map of a teenage girl. She turned around and saw the patch of grimy looking daylight that Thud had mentioned. It looked about a mile away. She made certain her leather bag was strapped tightly across her chest and began crawling on all fours toward the glimmer. The glutinous slime covered her legs halfway up her thighs and, worse, up to her elbows. Each time her hands sank into the sediment she could feel it writhe around her fingers. Her dress, already ragged and now also damp, clung to her like papier mâché. Half sliding, half crawling, accompanied by sounds very much like a cow sucking on its cud, she made her way toward her goal, such as it was. The squarish satchel on her chest acted like an anchor, dragging in the slime, dredging up malodorous bubbles that burst flatulently beneath her nose. She discovered that the light was an opening between the floor level of the building and the cobbled alleyway. The opening had been created to act as an outlet for the drainage of moisture from beneath the building.
It was working as intended and a stream of tepid, mucus-streaked fluid leaked from the opening; it then flowed over the cobbles into the central channel that drained the alley. She tried to straddle the flow, but it ran over one ankle and a hand, and within inches of her nose, which, not for the first time in her life, she wished had not been so long. She tried not to think of any of the several possible sources of the liquid. Keeping her head within the shadow of the hole, she peered as far into the street as she dared. One black-uniformed Guard stood at the entrance to the main thoroughfare to her right (the street Thud's large window overlooked), and another Guard had just turned the corner to her left, walking in her direction. She withdrew further into the darkness. The Guard, attracted perhaps by a hint of movement, a shadow within a shadow, or noticing a possible hiding place previously overlooked, came toward her. She had no place to go where he wouldn't be able to see her if he bent down and looked into the opening. The Guard approached within a few feet, drew his saber, and began to squat on his haunches, turning his head so he could see into the hole, trying to minimize his proximity to the fetid drool issuing from it. The girl felt her stomach wrench with the expectation of immediate capture when something warm and furry scuttled over her legs with icy little feet. A rat the size of a pampered house cat brushed under her nose, its cold, naked tail giving her lips a snide fillip as it headed for the street. It ran out between the Guard's legs. He leaped erect with a cry of disgust and struck at the rat with his blade, drawing a spatter of sparks from the pavement---but the animal disappeared into the jungle of crates, ashcans and garbage with a supercilious chuckle. The Guard flung a curse at the vanished animal and continued on his way. The girl thanked Musrum for rats. Again a shadow fell over the opening and again she shrank from it.
This time a familiar voice husked, "Girl? Are you there?" She cautiously poked her head into the open air. Thud stood there, towering over her like a captive balloon. He held a large stained canvas bag in such a way that it shielded the girl's hiding place from the two Guards at the end of the alley. He was busily picking up bits of broken wood and tossing them into the bag. "They already checked the bag. They think I'm just getting firewood." The girl crawled out of the hole and into the protective screen created by the bag. Thud casually bent to wrench a slat from the side of a fruit crate. As he placed his foot against the box to brace it, he let the near edge of the bag drop free. It fell to the cobbles, making a yard-wide circular opening. From the point of view of the Guards, the bag remained unchanged. The girl needed no prompting to catch on to the idea and scuttled into the bag instantly. Thud tossed the broken wood on top of her and moved on down the street. The entire act had taken but a moment and there had not been even a second's suspicious hesitation in Thud's movements. He stopped twice more, piling more scrap into his sack for realism's sake, waved to the Guards, who good-humoredly waved back at the enormous half-wit, and disappeared around the corner. The next ten minutes were not the most unpleasant the girl had ever experienced---little, she suspected, could be nastier than the crawl through the darkness under the stonecutters'. They were, however, more painful. Thud was being overzealously conscientious in his attempt at appearing casual and tossed in the firewood with an abandon that left the girl with more than one bruise and abrasion. Now, as he strode along with the bag hanging against his back, the girl wondered if it would ever be possible to sort herself out from the scrap. The contents of the bag were being stirred into a kind of aggregate girlumber.
She was almost upside down, knees pressed to her nose; the bag, none too roomy, squeezed her like a small but ambitious boa constrictor digesting a large bunny. Soon the character of the bouncing changed and she guessed that they were ascending a flight of stairs. Several flights, from the time that passed. The bag slammed against a wall, first on the right and then on the left, and the girl hazarded a protesting kick into the small of Thud's back, but to little avail. The jouncing eventually stopped, there was a rasping squeal, another jostle, a blow against the back of her head and the bag was set onto a floor with a thump that jarred her teeth. She looked out of its opening in time to see the big man closing the door through which they had just passed. When he turned, he saw her tumble from the sack, all akimbo. "Are you all right?" he asked. Her first thought was to say "No," an answer for which her bruises, scratches and embedded splinters argued persuasively. But she saw that the ugly man was in earnest; he hadn't asked casually: he was truly concerned. To reply in the negative would have been cruel; petty as well, since she was alive and that certainly was all right. What were bruises compared to what she knew could have happened to her had the Guards captured her and taken her back home? "Yes, I'm fine!" she answered, gladly, pleased when she saw the worry wiped from his face by one of his astonishing grins. The room in which she found herself was obviously the big man's home. It was no larger than a big closet, perhaps ten feet by twelve, which left little enough room to spare when the big man was at home, which was the case. There was not a right angle in it; the ceiling and walls sloped together into compound angles that made the girl guess, correctly as it happened, that the room was tucked into the attic of a building. Thick wooden beams criss-crossed through it, emerging from the walls, disappearing into the ceiling.
The walls had once been plastered but most of that had fallen off, leaving leprous, lath-boned holes. Thud had attempted to improve on the dreary appearance of his home by pasting over the holes with woodcuts and chromolithographs torn from the illustrated papers. He was pleased, but this effort really only succeeded in making the room look shabbier, possibly because the woodcuts were never quite the right size or shape to entirely cover the plasterless craters. No matter. The floor's planks were bare but very clean. In one corner was Thud's bed: a pair of large canvas bags, like the one he had carried the girl in, sewn mouth to mouth and filled with straw. A plain little table and a chair to match (which latter seemed altogether incapable of dealing with Thud's immense behind) completed the major furnishings. What little else there was is quickly listed: a curtain over the single window, washed and scrubbed to colorlessness and near transparency; a small wood-burning stove made from a discarded iron keg (in which Thud was now starting a fire); a wooden crate nailed to a wall that acted as both cupboard and pantry; a little oil stove on the table, next to a cracked, handleless cup filled with dirt from which sprang a twiglike plant with a single leaf; and, centered on one of the trapezoidal walls, a lone tintype photograph, surrounded by pictures of flowers, some of them gaudy chromos torn from magazines and seed catalogs, others laboriously hand-colored. The silvery picture was a portrait of a pretty, thin-faced girl who looked not very much older than Thud's foundling---except for the sad eyes; those looked very old. This made the girl think of her own appearance, and she looked down at herself in despair. Her dress was plastered to her body by mud and filth; it was as heavy and clammy as if it had been made of clay. It was ragged, one sleeve gone altogether, and huge rents were torn down either side. The petticoats beneath made a solid, sodden mass.
She had only one shoe. She touched her hair and wanted to cry: it felt like cold boiled spinach. Thud was busy at the little table. He had pumped up the pressure in the oil stove and it was now topped with a hissing blue flame. He was filling a battered tin pot with water from an unglazed ceramic jug. He had opened some cans and small packets. "You want to eat? I can make some hot tea, if you'd like." "Yes! And I want some of that water. I've got to wash my face." "Sure, here. You want to clean up? You want more hot water?" "That'd be wonderful! I'll be able to think clearly once I've gotten some of this filth off me," she said, scrubbing at her face with the offered cup of plain water and the piece of coarse cloth that came with it. "Me? Oh. My name's Thud. Mollockle. Thud Mollockle." "It's a pleasure to have met you, Mr. Mollockle. My name is...Bronwyn." "I am pleased to know you, too, Miss Bronwyn." The room was quickly warming up, for which she was grateful; she wrapped herself in the threadbare blanket Thud handed her. "I'm afraid that I've gotten you into a lot of trouble, Mr. Mollockle. Small enough thanks for saving my life, I suppose." "Me?" He seemed to have continuous difficulty believing that anyone would address him personally. "No, no trouble. You needed help. And I hate the Guards." Bronwyn looked at him sharply, surprised by and interested in the sudden bitterness with which the otherwise placid man had spoken those last five words. He seemed to sense the alteration in the girl's attention. It embarrassed him. "I'll get you that water for your bath---you must feel terrible. There's hot tea right there. And some food. Please, help yourself; I'll be right back." And before Bronwyn could say another word, he was gone. The door had opened and shut so quickly it had barely been able to utter a surprised "Eek!" She stepped over to the table and suddenly realized how weak she was.
Her legs felt wobbly and she nearly collapsed like a stringectomied marionette; a wave of vertigo swept over her, leaving her eyes momentarily unfocused. Her wet clothing felt unbearably repulsive---and she was suddenly freezing in spite of the warmth of the room. She unfastened the dress with shaking fingers, losing half a dozen buttons in the process. The garment, its fine fabric never intended for such uncouth abuse, peeled away from her body like the skin of a scalded tomato. She kicked the mass into a corner, rewrapped herself in her blanket and fell gratefully into the chair. She picked up the thick mug of steaming tea; it was like cupping a kitten in her hands. She held it up to her face and let the fragrant vapor caress her cheeks, nose and eyes. The heat made her nose start to run. When Thud returned, she was eating one of his fat, stale soda crackers and a slice of potted meat. He was carrying a pair of enormous buckets, each holding at least ten or fifteen gallons of steaming water, as easily as a milkmaid. He set them heavily on the floor and said, "I'll be right back." A moment later, there came sounds like the bonging of a giant cowbell from beyond the door, which burst open revealing the vast dorsal view of Thud. He backed into the room, pulling in after him a battered tin tub. Dropping it with a resonant clang in the middle of the room, he circled it to close the door. There was now not a square inch of floor left unaccounted for. Still without a word, he poured the contents of the buckets into the tub. The water was still so hot it fizzed as it splashed onto the metal. "You had better take your bath while the water's hot," Thud said. "It'll get cold real quick." Bronwyn was taken aback for a moment, as she realized that Thud meant for her to take her bath right there and then.
A chiding protest came to her lips but died there aborning as she looked into the ridiculous round face and saw nothing but kindness and a concern that was earnest and gentle. She had the unkind but perfectly natural thought that taking a bath in front of Thud would be not unlike taking a bath in front of a pet dog. Natural but, admittedly, probably quite accurate. She was suddenly overbrimming with fatigue, and every bruise and muscle in her body gave a single agonizing throb in unison. She stood up from the chair and took but one step toward the tub before she started to topple. Thud was beside her in an instant, supporting her by one hand with the firm gentleness that always seemed so impossible for him. With the other, he pulled away the blanket, an action done so casually that Bronwyn allowed the familiarity without a word of protest. He then slipped his free hand behind her knees and lifted her from the floor. She looked like a rag doll in the giant's arms. He lowered her into the tub. The water felt scalding at first and she cried out weakly. Thud ignored her; soon she felt as though she were dissolving like a block of dry ice into the steam that billowed around her. She could feel herself turning bright red as blood that had withdrawn deep within her rushed eagerly back into her skin. A hand, rough as leather, touched her shoulder and carefully pushed her forward, until her nose nearly touched the water. Using handfuls of crude soap scooped from a wooden bowl, Thud began scrubbing her body. In her previous life, Bronwyn would rather have died than have anything like this corrosive, abrasive substance put on her skin. Now it felt like smooth, rich cream. But then anything would have felt better than the unspeakable filth and slime that covered her. Thud's soap was pungent and clean smelling. A day's worth of dirt washed from her, a day's worth of pain and many weeks of fear and anger. She felt herself drifting; the firm massaging was hypnotic.
She felt safe and, for the first time in months, as though she might have some hope of carrying out her mission. "Hold your nose," Thud said, simultaneously pushing her face under the surface of the water. The heat pressed against the lids of her sore eyes. She lifted her head and a corona of streamlets poured in a circle around her downturned face. Thud worked a handful of the raw soap into her hair. His thick fingers kneaded her scalp as though it were a ball of dough. It was exactly like washing the puppy he had once had in his now dream-like childhood. Bronwyn had long since passed into a kind of achronic reverie. She had no recollection of Thud lifting her from the bath, holding her by passing an arm behind her back while he rubbed her dry with a coarse, brown cloth until she was as pink as a shrimp, nor any consciousness of being wrapped in ragged, patched blankets until she looked like a fat, hand-rolled cigar, then laid so gently onto his straw pallet that it scarcely rustled. She had long since fallen asleep. Night had meanwhile fallen over the city of Blavek, and Thud had had to finish his work on Bronwyn by the light of a single tallow candle. When he was done, he carried the candle, not minding the molten pearls of wax that ran over his fingers, over to the tintype portrait, surrounded by its field of gaudy paper flowers. He looked for a long moment at the silvery face that seemed so alive in the flickering candlelight. Leaning forward slightly, he kissed it, just once, just so. Blowing out the candle, he crossed the lightless room. Only the grey square of the window relieved the darkness. He sat in the small wooden chair under the window, beside the table, and stared into the room for a long time before he, too, fell asleep. Chapter 1: The Rescue. Published Nov. 14, 2017.
Welcome to the fifth lesson, ‘Project Time Management’, of the CAPM Tutorial, which is a part of the CAPM Certification Course offered by Simplilearn. In this lesson, we will focus on project time management. In the next section, let us take a quick look at the project management process map. There are 47 processes in project management, grouped into ten Knowledge Areas and mapped to five Process Groups. In this lesson, we will look at the third knowledge area, i.e., Project Time Management, and its processes. Let us begin with the first topic of this lesson, project time management. The purpose of project time management is to ensure that projects get completed on time. This knowledge area is primarily concerned with developing a project schedule and ensuring that the project goes according to the agreed schedule. If there is a need to change the project schedule, the change should happen through a proper change control procedure. Another term used in the CAPM examination is the schedule management plan. The schedule management plan is part of the project management plan and has information on the planned project schedule and its management and control. Let us discuss the key activities of project time management in the next section. It is important to identify the list of activities that would be a part of the project. Next, the time and resources required to complete the identified activities are estimated. Finally, these activities need to be sequenced as per their dependencies. In the next section, let us discuss the project schedule. The project schedule represents the time dimension of the project plan and has information such as when the project will start, when each of the project activities will happen, in what order the project activities will happen, when the project will be completed, etc. Usually, a software system is used to develop the project schedule.
The project team can enter the list of activities in the software as well as their dependencies, and the software can produce the project schedule as the output. Microsoft Project is the most popular tool used for project schedule development. The project schedule is sometimes considered the same as the project management plan, but the two are different. The project management plan not only has information about the project schedule, but also other important project-related plans, like the risk management plan, cost management plan, etc. Let us focus on the Gantt chart in the next section. A Gantt chart is a type of bar chart that illustrates a project schedule. It shows the dependencies between the project activities as well as their percent completion. A sample Gantt chart is shown below. Two summary elements of the work breakdown structure are depicted. To complete those elements, there are a number of activities under them. Some of these activities have dependencies. For example, Activities B and C have a dependency: Activity C can start only when Activity B is completed. The chart gives you an idea of when specific activities are planned to finish and when the overall WBS element will get delivered. Practice creating tasks and working with the Gantt chart; this will make answering Gantt-chart-based questions easy and fun. In the next section, let us understand the relationships that exist among project activities. Finish to Start: an activity must finish before its successor can start. For example, Activity B can start only when Activity A completes. Start to Start: an activity must start before its successor can start. For example, Activity B can start only when Activity A starts. Finish to Finish: an activity must finish before its successor can finish. For example, Activity A has to complete before Activity B can complete. Start to Finish: an activity must start before its successor can finish. For example, Activity A has to start before Activity B finishes.
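The four relationship types can be sketched as date arithmetic. This is a minimal, purely illustrative Python sketch (the lesson itself prescribes no code, and the function and activity dates are invented for the example); it shows which end of the predecessor constrains which end of the successor, with an optional lag.

```python
from datetime import date, timedelta

def successor_constraint(rel, pred_start, pred_finish, lag_days=0):
    """Earliest date for the constrained end of the successor.

    rel: one of "FS", "SS", "FF", "SF".
    For FS and SS the result constrains the successor's *start*;
    for FF and SF it constrains the successor's *finish*.
    """
    lag = timedelta(days=lag_days)
    if rel == "FS":   # successor may start once the predecessor finishes
        return pred_finish + lag
    if rel == "SS":   # successor may start once the predecessor starts
        return pred_start + lag
    if rel == "FF":   # successor may finish once the predecessor finishes
        return pred_finish + lag
    if rel == "SF":   # successor may finish once the predecessor starts
        return pred_start + lag
    raise ValueError(f"unknown relationship: {rel}")

# Activity A runs 1-5 March; under FS with a 1-day lag,
# Activity B may start no earlier than 6 March.
print(successor_constraint("FS", date(2024, 3, 1), date(2024, 3, 5), lag_days=1))
```

Note that FS and SF use opposite ends of both activities, which is why Start to Finish is so rarely seen in real-life projects.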
Out of these four types, Finish to Start is the most commonly used and Start to Finish is the least used in real-life projects. Along with these relationships, you also need to be aware of “dependencies”. Let us look at the dependencies in the next section. There is a subtle difference between a dependency and a relationship. A dependency is how activities are interdependent in one way or the other. There are two ways of classifying dependencies. Classification 1 - The first classification is mandatory or discretionary. Mandatory dependencies cannot be bypassed. For example, the foundation of a civil structure must be laid before working on the pillars and slabs. Discretionary dependencies, on the other hand, arise out of the preferences of the project team. For example, a team may prefer that painting activities start only after all the electrical and plumbing work is done. Classification 2 - The other way of classifying dependencies is external or internal. External dependencies involve a third party or an entity outside the project team. For example, if a construction project is dependent on the approval of the structural design by a government authority, it becomes an external dependency. Internal dependencies are within the control of the project team. For example, the start of the slab work being dependent upon the availability of ready-mix concrete may be an internal dependency. In the next section, let us look into the network diagram. A network diagram is extensively used in the project time management knowledge area to plot the activity dependencies. It is a graphical representation of the project activities in the form of a network. In the Precedence Diagramming Method (PDM), or Activity on Node (AON), boxes represent activities and the arrows indicate the dependencies. This type of network can have all four types of relationships between the activities. In the Arrow Diagramming Method (ADM), or Activity on Arrow (AOA), the arrows represent activities.
The relationships and sequence can be inferred from the direction of the arrows and the linkages between the activities. In such networks, only Finish to Start relationships can be shown. Such diagrams may need to make use of dummy activities to indicate some dependency between the activities. There may be questions in the CAPM exam based on the network diagram, so practice creating and working with such diagrams; this will make answering network-diagram-based questions easy and fun. In the next section, let us look at a network diagram. A sample network diagram is shown below. Activities A and C can happen in parallel. B and D require both A and C to complete, whereas E requires both B and D to complete. An Activity on Arrow network diagram makes use of “hammock activities”. These are used to show a comprehensive summary activity combining several other activities underneath it for control and reporting purposes. In the next section, let us look at a few important terms in time management. Let us look at some of the key terms used in project time management. When a successor activity can start before the predecessor activity completes, it is considered a lead. For example, you can start preparing the test environment 2 weeks before the development activity finishes. When the successor activity has to wait for a few days after the predecessor activity has been completed, it is considered a lag. For example, one needs to wait 2 days for the foundation to settle before work on the pillars for the next floor starts. Rolling wave planning is an iterative planning technique in which the work to be accomplished in the near term is planned in detail, while the work in the future is planned at a higher level. It is a form of progressive elaboration. In the context of estimating techniques, analogous estimating is based on previous project data. Therefore, if the last 5 similar projects took 6 months to complete, the next one is also estimated to take 6 months.
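Analogous estimating reduces to very simple arithmetic: take the durations of comparable past projects and use their typical value. A minimal, illustrative Python sketch (the helper name and sample data are invented; the lesson prescribes no code) using the mean of past durations:

```python
def analogous_estimate(past_durations_months):
    """Analogous (top-down) estimate: the average duration of similar past projects."""
    return sum(past_durations_months) / len(past_durations_months)

# The last 5 similar projects each took 6 months -> estimate 6 months
print(analogous_estimate([6, 6, 6, 6, 6]))  # 6.0
```

In practice the past data is adjusted by expert judgment rather than averaged blindly, which is why analogous estimating is considered quick but relatively low-accuracy.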
This technique employs expert judgment. Another estimating technique is parametric estimating. This technique uses a mathematical model to calculate the projected time for an activity based on historical records from previous projects and other information. A few common parameters are identified based on the previous project data, and those parameters are used to predict the time required to complete the next activity or project. For example, you can normally complete 10 kilometers of highway construction lanes a week. Effort is the total amount of work required to complete the activity. Duration is the amount of time it takes in terms of elapsed or calendar days. If you have an activity that requires 10 people to work for 5 days, the total effort is 50 person-days but the duration is only 5 days. In the next section, let us look at the project time management processes. The first six processes are executed during the planning process group. The ultimate goal of these planning processes is to develop the project schedule. The seventh and last process is a part of the monitoring and controlling process group. In the next few sections, let us discuss these processes in detail. We will begin with plan schedule management. As defined in the PMBOK Guide, plan schedule management is the process of establishing the policies, procedures, and documentation for planning, developing, managing, executing, and controlling the project schedule. It belongs to the planning process group. Let us look at the inputs to this process. The project management plan provides other subsidiary plans and will guide the schedule planning activities on the project. The project charter provides an overall context and the high-level product and project description, which might help determine the approach for schedule management. A few projects might have scheduling constraints.
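Before moving on, the parametric and effort-versus-duration arithmetic above can be sketched in a couple of lines. This is purely illustrative Python (the function names are invented, and the even-split assumption for effort is an idealization the lesson does not state):

```python
def parametric_duration(quantity, rate_per_period):
    """Parametric estimate: duration = quantity / historical production rate."""
    return quantity / rate_per_period

# At 10 km of highway lanes per week, 45 km takes 4.5 weeks
print(parametric_duration(45, 10))  # 4.5

def duration_from_effort(effort_person_days, people):
    """Duration in calendar days, assuming the work divides evenly among people."""
    return effort_person_days / people

# 50 person-days of effort with 10 people -> 5 days of duration
print(duration_from_effort(50, 10))  # 5.0
```

The second function makes the effort/duration distinction concrete: adding people reduces duration but leaves total effort unchanged.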
Organizational process assets provide inputs such as policies and procedures, templates, past performance data and estimates, historical information, and knowledge bases. Now, let us look at the tools and techniques employed in this process. Expert judgment refers to input received from knowledgeable and experienced resources. Experts can draw from their previous experiences the proper approach to govern the schedule on a project. Meetings may be organized to determine the schedule management plan. Anybody responsible for project schedule management, such as the project manager, sponsor, customer, and other stakeholders, should attend these meetings. Now, let us look at the outputs of this process. The schedule management plan is a component of the project management plan that describes the criteria and activities required to arrive at the project’s schedule, as well as how the schedule may be baselined, monitored, and controlled. In the next section, let us discuss the define activities process. Define activities is the process of identifying the specific actions to be performed to produce the project deliverables. It belongs to the planning process group. The important input for the define activities process is the scope baseline. A reason why enterprise environmental factors are an input to define activities is that the organization might be using project management software to define activities, and that may influence the activity definition process. A knowledge base containing historical information regarding activity lists used by previous similar projects is a good example of organizational process assets applied to scheduling. The tools and techniques used in defining activities are decomposition and rolling wave planning. The last technique is expert judgment, where the experience of project team members is used in developing a detailed activity list. The outputs of the process are the activity list, activity attributes, and milestone list.
The activity list contains a list of identified activities. Activity attributes are the additional information about the activity itself. A milestone is a significant point or event in the project. A milestone list identifies all the milestones and indicates whether each milestone is mandatory or optional. Let us move on to the next process, sequence activities. Sequence activities is the process of identifying and documenting relationships among the project activities, and it is also part of the planning process group. Every activity and milestone, except the first and last one, is connected to at least one predecessor and one successor. The activity list, activity attributes, and milestone list, which are the outputs of the define activities process, are inputs to this process. The other inputs are the schedule management plan and project scope statement. The schedule management plan provides guidance in terms of the methodology to be employed for many of the scheduling activities on the project. Organizational process assets are also an input to sequencing activities because the organization might have some kind of knowledge base for scheduling project activities. The enterprise environmental factors relevant to this process may be the scheduling tools in use, project management information systems, work authorization systems, etc. One of the important tools and techniques used in sequencing activities is the precedence diagramming method. In this method, the activities are drawn on a network diagram and all the different kinds of dependencies between the activities are determined. While determining activity dependency, it is important to identify the type of relationship or dependency between the activities. The other technique is leads and lags, which is widely used for sequencing activities. The output of the process is the project schedule network diagram, a graphical representation of the project activities in the form of a network, which also shows the activity dependencies.
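What a project schedule network diagram encodes can be sketched programmatically. Taking the sample network described earlier in this lesson (A and C in parallel; B and D after both A and C; E after B and D) and giving the activities illustrative durations (the durations and the Python code are assumptions for the example, not part of the lesson), a simple forward pass computes each activity's earliest start and finish:

```python
# Illustrative durations in days and the sample network's predecessors
durations = {"A": 3, "B": 2, "C": 4, "D": 5, "E": 1}
predecessors = {"A": [], "C": [], "B": ["A", "C"], "D": ["A", "C"], "E": ["B", "D"]}

def forward_pass(durations, predecessors):
    """Return {activity: (early_start, early_finish)} for an acyclic network."""
    early = {}
    remaining = dict(predecessors)
    while remaining:
        for act, preds in list(remaining.items()):
            # Schedule an activity once all of its predecessors are scheduled
            if all(p in early for p in preds):
                es = max((early[p][1] for p in preds), default=0)
                early[act] = (es, es + durations[act])
                del remaining[act]
    return early

print(forward_pass(durations, predecessors))
```

With these durations, E starts on day 9 and the project finishes on day 10; the longest chain (C, D, E) is what the critical path method would identify.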
While designing the network diagram, new activities might be identified, and that would result in some project document updates, especially to the documents that list all the project activities. In the next section, let us look at the estimate activity resources process. After sequencing the activities, the next step in project time management is estimating the resources required to accomplish each of the identified activities. The estimate activity resources process also belongs to the planning process group. Here, resources do not mean only human resources but include all other resources like equipment, raw materials, machinery, etc. The schedule management plan is the first input; it provides guidance in terms of the methodology to be employed for many of the scheduling activities on the project. The other inputs to this process are the activity list and activity attributes, which are outputs of the earlier time management processes. The risk register is another input. Risks to the project may influence the decisions about the resources that need to be deployed; hence the risk register becomes an input to this process. In addition, activity cost estimates are another input. Cost and resource estimates on a project are interrelated and influence each other. For example, the cost might dictate the number and type of resources that can be employed, or the time might dictate the cost that may need to be incurred. Along with these, the resource calendar is also an important input, because it has information about the availability of each of the resources. One of the enterprise environmental factors that can influence estimating activity resources is the availability of the required resources within the organization. Organizational process assets are also an important input, as the organization might have standard policies for staffing or for hiring contractors on the project.
With all these inputs available for estimating activity resources, there are various techniques used for estimating the required resources. The first technique is using expert judgment. In this technique, an expert in resource planning and estimating estimates each of the activities. The next technique is alternative analysis. Here, the activities are analyzed to identify different ways of completing them. This is to ensure that only the required resources are assigned to each of the activities, which helps in resource optimization. Along with the above two techniques, many organizations routinely publish estimating data, and this can be used in activity resource estimation. Another technique that is routinely used is bottom-up estimating, which is decomposing an activity further down to understand it in more detail and estimating at that level. Later, all such estimates are added up to arrive at the estimate for the activity. In real projects, one has to use all of the above to estimate each of the activities. Sometimes project management software also helps in estimating. The software's estimates are based on the inputs provided to it. Software should only be considered a supporting tool in estimation, and one should never fully rely on its output. Clearly, the output of this process is the activity resource requirements. Along with this, a resource breakdown structure is also prepared. The resource breakdown structure is the categorization of all the required resources into various categories, i.e., human resources, equipment, raw materials, etc. In the process of estimation, several other project documents may also be updated; for example, the resource estimates are directly correlated with cost estimates. Let us now move on to the next process, estimate activity durations. The next process is to estimate the duration required to complete each of the activities.
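The bottom-up technique mentioned above amounts to decomposing, estimating the pieces, and summing. A minimal, illustrative Python sketch (the sub-task names and hours are hypothetical; the lesson prescribes no code):

```python
def bottom_up_estimate(subtask_estimates):
    """Bottom-up estimate: sum the estimates of an activity's decomposed pieces."""
    return sum(subtask_estimates.values())

# Hypothetical decomposition of an "install fixtures" activity, in hours
subtasks = {"unpack": 2, "mount": 6, "wire": 4, "inspect": 1}
print(bottom_up_estimate(subtasks))  # 13
```

The trade-off is the usual one: bottom-up estimates are more accurate because each small piece is better understood, but producing them costs more time than analogous or parametric estimating.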
The duration estimation should be done by someone who is familiar with the work involved. For example, the same activity, if done by a highly skilled resource, would take less time than if done by a less skilled resource; the duration may also vary with the project requirements, regardless of the resource used. In addition, the activity duration estimates should be updated continuously as you move ahead with the project, because as the project progresses, there will be more clarity on the work. The inputs to this process are similar to those of the sequence activities process. The schedule management plan, for instance, may contain information about the estimation techniques to be employed and the people who need to be involved in the estimation process. Activity resource requirements are also an input to this process because the resources assigned to an activity significantly affect the activity duration, i.e., lower skilled resources would take more time than highly skilled resources. The other inputs include the activity list, activity attributes, and resource breakdown structure. Resource calendars are also an input to this process. The type and skill set of the resources available to the team may have an impact on the time it would take to complete the activities. Risks to the project may influence the decisions about the time required to complete an activity; hence the risk register becomes an input to this process. The project scope statement defines the constraints and assumptions affecting the project duration. An example of an enterprise environmental factor that can affect duration is the organization’s productivity metrics, which are collected based on the experiences of multiple projects. The last input to this process is organizational process assets. Now, let us look at the tools and techniques used for estimating activity durations. First is expert judgment, which means using previous project experience in estimating the current project's durations.
Expert judgment can be used along with other estimation techniques, and it can be used to reconcile differences when different techniques result in different estimates. Three-point estimating is a method where three estimates are used instead of one; it is part of a project management philosophy known as the Program Evaluation and Review Technique (PERT). Estimating activity durations is often done as a team exercise, as each activity may require multiple skill sets to be applied. Therefore, it is important to use group decision-making techniques to arrive at a consensus, or at least an estimate that is acceptable to all the team members. Reserve analysis adds a buffer into the project schedule to deal with uncertainty; the contingency reserve may be added as a percentage of the activity duration or as a fixed number of work periods. The other tools and techniques are analogous estimating and parametric estimating. The outputs of this process are the activity duration estimates and project documents updates. The activity duration estimates are represented as a range of possible results, for example, 10 days plus or minus 2 days, i.e., the activity would take a minimum of 8 days and a maximum of 12 days. In the next section, let us look into a business scenario to understand this concept better. Jan, the EVP of the manufacturing division, has commissioned Jack to lead a project initiative in her area of responsibility because of his attention to detail. Jack is working with his team to estimate activity durations so they can map out the schedule for the project. After successfully decomposing the scope statement of work, Jack is confident in his team's ability to capture the true work effort that needs to be estimated and scheduled. In reviewing the activities to be estimated, Jack realizes that a large number of the activities could benefit from some historical data and the use of mathematical parameters. This minimizes his estimating risk for 60% of the activities.
For the remaining activities, the team is able to research past practices and industry standards to come up with a range of estimates for the duration. What approach are Jack and his team likely to take to determine their estimates? Jack and his team have decided to use the parametric estimation technique for the large number of activities with historical data, and for the remaining activities they have decided to use the three-point estimate, which is also known as PERT. Let us now look into the next process, i.e., develop schedule. Develop schedule is the process of analyzing activity sequences, durations, resource requirements, and schedule constraints to create the project schedule. It belongs to the planning process group. Generally, scheduling software is used for developing the project schedule; entering the activities, durations, and resources into a scheduling tool generates a schedule with planned dates for completing project activities. Developing a project schedule is an iterative process. Revising and maintaining a realistic schedule is a task in itself, and it continues throughout the project as the work progresses. Various tools and techniques are used in the develop schedule process. Schedule network analysis is a technique that generates the project schedule; it employs techniques such as the critical path method, the critical chain method, what-if scenario analysis, and resource optimization techniques to create the project schedule. The other tools and techniques include leads and lags, schedule compression, and the scheduling tool. Let us now look at the outputs of this process. The project schedule consists of, at a minimum, a planned start date and planned finish date for each activity. Although the project schedule can be represented in tabular format, it is more often represented graphically, using either bar charts or network diagrams or a combination of the two. The final schedule, which is the output of the develop schedule process, is also called the "schedule baseline". Once the schedule is baselined, it can be changed only through formal approvals.
Meeting the schedule baseline is one of the measures of project success. The schedule data produced may include resource requirements, key milestones, etc. Project calendars specify the available working days and the number of shifts in each day; they indicate how many hours and days are available for the work of the project to be completed. Project management plan updates are a result of the develop schedule process. Many of the other subsidiary plans of the project plan may get impacted, including the cost management plan, scope management plan, risk management plan, etc. This process may also result in other project documents being updated. In the next few sections, let us discuss schedule network analysis techniques. Along with the time estimate for each of the activities, it is essential to know whether the required resources are available at that time; since the schedule is calendar based, it helps in verifying this. The schedule network analysis technique generates the project schedule. The critical path method relies on determining the critical path on a project schedule. The critical chain method is a variant of the critical path method, wherein the critical chain is determined based on the logical, resource, and other kinds of dependencies between the activities. What-if scenario analysis is about varying a certain parameter to observe the impact on the schedule; for instance, you may want to check the result of putting more resources on a particular activity to reduce its duration. Resource optimization techniques try to arrive at the optimal utilization of the resources used on a project. Ideally, you would want the resources to be fully utilized, but you would also want to build in sufficient buffers in case a certain resource is not available for various reasons. In the next section, let us look at the Program Evaluation and Review Technique. PERT uses three estimates for each activity: the pessimistic estimate represents the amount of time an activity would take in the worst-case scenario, the most likely estimate represents the time it would take under normal conditions, and the optimistic estimate, on the other hand, represents the amount of time an activity would take in the best-case scenario.
Based on these three estimates, the expected duration of the activity is calculated using the PERT formula: expected duration = (optimistic + 4 × most likely + pessimistic) / 6. The standard deviation is (pessimistic − optimistic) / 6, and the variance is the square of the standard deviation. There is no question asked directly on variance, but the formula is important: if the standard deviation of the whole project is to be calculated, the process is to calculate the variance of the whole project and then take its square root. Concept-based questions on PERT can be expected in the CAPM exam, so make a note of the formulae while you prepare for the exam. In the next section, let us understand PERT with an example. Let us now figure out how we can apply the three-point estimation that PERT uses in order to draw some useful conclusions. Assume that the optimistic, pessimistic, and most likely estimates are 20, 70, and 30, respectively. Using these values, you can determine the expected duration, (20 + 4 × 30 + 70) / 6 = 35, and the standard deviation, (70 − 20) / 6 ≈ 8.33. Now, if the causes of variation are random, you can assume that the actual values would be evenly distributed about the mean and would follow the normal distribution, sometimes referred to as the bell curve. Further, you can use the properties of the normal distribution: there is a 68% probability of the actual value falling within one sigma of the mean, a 95.4% probability for two sigma, and a 99.7% probability of the actual value falling within three sigma. Extending this logic, the notion of Six Sigma is reaching a level of confidence such that only 3.4 times out of a million would the actual value fall outside the stated range. PERT allows you to plan based on the intended "level of confidence" in the outcome and determine buffers accordingly. Let us discuss the critical path method in the next section. The critical path is defined as the longest duration path through a network diagram, and it determines the shortest time in which the project can be completed. Float can be considered as the buffer time available to complete an activity.
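The worked numbers above can be verified with a few lines of code. This is just the standard PERT arithmetic applied to the example's estimates, not any particular scheduling tool.

```python
# Standard PERT three-point formulas, applied to the example's estimates:
# optimistic O = 20, most likely M = 30, pessimistic P = 70.
def pert_estimate(o, m, p):
    """Return (expected duration, standard deviation, variance)."""
    expected = (o + 4 * m + p) / 6
    std_dev = (p - o) / 6
    return expected, std_dev, std_dev ** 2

expected, sd, var = pert_estimate(20, 30, 70)
print(expected)          # 35.0
print(round(sd, 2))      # 8.33
# A ~68% confidence range is one sigma either side of the mean:
print((round(expected - sd, 2), round(expected + sd, 2)))  # (26.67, 43.33)
```

Widening the range to two or three sigma raises the confidence level to roughly 95.4% and 99.7%, matching the normal-distribution properties described above.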
Float is calculated once the network diagram is ready. It is also called slack. There are three kinds of float. The first is total float, which is the amount of time an activity can be delayed without delaying the project end date or an intermediary milestone. The second is free float, which is the amount of time an activity can be delayed without delaying the early start date of its successor activities. The last is independent float, which is the amount of time an activity can be delayed if all its predecessors finish at their latest finish dates and you want to start all the immediate successors at their earliest start dates. The slack of the activities on the critical path is zero, because there is no scope to delay activities on the critical path. The critical path actually represents the project duration; delaying activities on the critical path is as good as delaying the project itself. Concept-based questions on the critical path can be expected in the exam, so it is essential to have a clear understanding of the concept. In the next section, let us learn how to calculate float. The float of an activity can be calculated by two methods. The first step in the critical path method, however, is to identify the critical path of the network. Once the critical path is identified, follow the forward pass to find the early start and early finish for each activity; the float of the activities on the critical path is zero, so they represent the overall project duration. Then calculate the late finish and late start of each activity using the backward pass. Make a note of the total float formula before you sit for the exam. In the next section, let us look at the forward pass and backward pass methods in detail. In the forward pass, you go through the network starting at time zero and keep calculating the time required to complete each of the activities until you reach the last activity of the project.
The starting time for each activity in this approach is called the "early start" and the end time is called the "early finish". Alternatively, in the backward pass, you travel through the network from the project end date and calculate the latest times by which each activity must be done. The end date of an activity in this approach is called the late finish, and its start date is called the late start. The float of an activity is either the difference between the late start and early start or the difference between the late finish and early finish; both differences work out to be the same. Before the start of the CAPM exam, please make a note of the total float formula. Let us understand the critical path calculation with an example in the next section. Let us look at an example of the critical path. There are five activities in this project and two paths in the network diagram: Start, 1, 2, 4, 5, End is one path and Start, 1, 3, 5, End is the second path. Since the duration of the path Start, 1, 2, 4, 5, End is 18 days, which is more than the duration of the path Start, 1, 3, 5, End, the critical path of the project is Start, 1, 2, 4, 5, End. Let us take activity 3 as an example. First, calculate the early start and early finish dates. Activity 3 can start only after activity 1. Since the early finish of activity 1 is 3, it becomes the early start of activity 3; activity 3 cannot start earlier than 3, because activity 1 can only be completed by then. Therefore, the early start of activity 3 is 3. Since the duration of the activity is 4 days, the early finish of activity 3 is 3 + 4 = 7 days. Now, let us calculate the late start and late finish of activity 3. The late start of activity 5 is 14 days. Activity 3 comes immediately before activity 5, so the late finish of activity 3 is 14 days. To calculate the late start, subtract the duration from the late finish. Therefore, the late start of activity 3 is 14 − 4 = 10.
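The forward and backward passes above can be sketched in code for the same network. The durations of activities 1, 3, and 5 (3, 4, and 4 days) follow from the figures quoted in the example; the durations of activities 2 and 4 are assumptions, chosen only so that the critical path Start, 1, 2, 4, 5, End still totals 18 days.

```python
# Critical path method for the example network.
# Durations of activities 2 and 4 are assumed (the example fixes only
# their path total); 1, 3, and 5 match the figures in the text.
durations = {"1": 3, "2": 5, "3": 4, "4": 6, "5": 4}
successors = {"1": ["2", "3"], "2": ["4"], "3": ["5"], "4": ["5"], "5": []}
predecessors = {a: [p for p, s in successors.items() if a in s] for a in durations}

# Forward pass: early start (ES) = max early finish (EF) of predecessors.
es, ef = {}, {}
for a in ["1", "2", "3", "4", "5"]:          # topological order
    es[a] = max((ef[p] for p in predecessors[a]), default=0)
    ef[a] = es[a] + durations[a]

project_duration = max(ef.values())          # 18 days

# Backward pass: late finish (LF) = min late start (LS) of successors.
ls, lf = {}, {}
for a in ["5", "4", "3", "2", "1"]:          # reverse topological order
    lf[a] = min((ls[s] for s in successors[a]), default=project_duration)
    ls[a] = lf[a] - durations[a]

# Total float = LS - ES (equivalently LF - EF); zero on the critical path.
total_float = {a: ls[a] - es[a] for a in durations}
print(project_duration)     # 18
print(total_float["3"])     # 7  (late start 10 minus early start 3)
```

Note that activity 3 reproduces the text's figures exactly (early start 3, early finish 7, late finish 14, late start 10), while activities 1, 2, 4, and 5 all show zero float, confirming they lie on the critical path.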
In the next section, let us focus on schedule compression. Schedule compression is done to optimize the schedule or to meet some externally imposed deadline on the project. For example, suppose you followed the critical path method and arrived at a project duration of, say, 100 days, whereas the customer wants the project done in 95 days. In such cases, schedule compression techniques are used to see if the schedule can be compressed to reduce the time by 5 days. In fast-tracking, activities that normally happen in sequence are examined to see if they can happen in parallel. Typically, this involves sacrificing some of the discretionary dependencies; if more activities can be done in parallel, the project speeds up. For example, in the diagram shown above, activities B and C were to be done in sequence, but if you can find a way to do them in parallel, you may be able to save time. Crashing involves increasing cost to save time; for example, you can use more resources, more skilled resources, or advanced techniques to compress the timelines. However, you end up making cost and schedule tradeoffs to determine how to obtain the greatest amount of schedule compression for the least incremental cost while maintaining the project scope. Concept-based questions on schedule compression can be expected in the exam, so ensure that you have a good understanding of the topic. In the next section, let us look at an example of schedule compression. Look at the four activities in the table. The normal cost of executing each activity, as well as how much each activity can be crashed, is also provided. Which activity would you crash to reduce the project time by 1 day? Activity A should be crashed to reduce the duration of the project by 1 day, as the per-unit cost of crashing activity A is the least. All the activities are assumed to be on the critical path here.
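The selection rule in the example (crash the critical-path activity with the lowest per-unit crash cost) can be sketched as below. The tutorial's table is not reproduced here, so the activity names, crashable days, and costs are hypothetical placeholders.

```python
# Choosing which activity to crash: pick the critical-path activity with
# the lowest incremental cost per day saved. Numbers are hypothetical;
# all four activities are assumed to be on the critical path.
crash_options = {
    "A": {"max_crash_days": 2, "cost_per_day": 500},
    "B": {"max_crash_days": 1, "cost_per_day": 800},
    "C": {"max_crash_days": 3, "cost_per_day": 650},
    "D": {"max_crash_days": 2, "cost_per_day": 900},
}

def cheapest_to_crash(options):
    """Return the activity offering the cheapest one-day saving."""
    crashable = {a: o for a, o in options.items() if o["max_crash_days"] > 0}
    return min(crashable, key=lambda a: crashable[a]["cost_per_day"])

print(cheapest_to_crash(crash_options))  # A
```

A fuller implementation would re-run the critical path calculation after each crashed day, since shortening one path can make a different path critical.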
While crashing, if you end up saving time on a path that is not the critical path, you will not save any time on the project. In the next section, let us look into the other techniques used in scheduling. What-if scenario analysis: in this technique, questions like "What if a particular thing changed on the project, would that produce a shorter schedule?" are put forth to understand the impact of specific changes on the schedule. The goal is to produce a realistic schedule. Monte Carlo analysis: in this method, a computer simulates the outcome of a project, making use of randomly generated values that follow the probability distributions of the input variables. Together, these two techniques are called modeling techniques. Resource optimization techniques: these are also used to produce a resource-limited schedule, and they result in a more stable number of resources being used on the project. Critical chain method: this technique develops a project schedule that takes into account both the activity and resource dependencies. In the next section, let us look at the last process in project time management, control schedule. Control schedule is concerned with determining the current status of the project schedule, determining whether the project schedule has changed, and managing the actual changes as they occur. The project schedule is an important input to this process; it is the actual schedule that needs to be controlled. Schedule data contains information related to the schedule that may need to be monitored in order to take actions to bring the project back on schedule. The project calendar describes the working hours and days for the project. Work performance data has information such as which activities have started, their progress, and which activities have finished. The other inputs of this process are the project management plan and organizational process assets.
The key tool and technique of this process is performance reviews. A performance review involves measuring, comparing, and analyzing schedule performance, such as actual start and finish dates, percent complete, and remaining duration of work in progress. The scheduling techniques used in the develop schedule process are applied here as well: in develop schedule, they are used for the first time to develop the project schedule, whereas in control schedule, they are used to update the project schedule. The key output of the control schedule process is work performance information, which is represented in the form of schedule variance (SV) and schedule performance index (SPI). As part of the control schedule process, the project team will also generate forecasts of the likely schedule for forthcoming activities and for the project as a whole. The other outputs include organizational process assets updates, change requests, project management plan updates, and project documents updates. In one of Janice's project team meetings, her team is reporting the status of their assigned activities defined in the project schedule. About halfway through the process, a problem with the schedule starts to reveal itself, as several activities are behind schedule. After all activities for this phase of the project are reported, the overall schedule is determined to be progressing at about a 75% productivity rate. Janice has to figure out how she can get the schedule back on track. How can Janice go about solving this schedule problem? To aid in the decision-making process, Janice needs to schedule a follow-up meeting with her team to evaluate the impact of this delay on the triple constraint. The delivery of the scope and budget, along with the quality expectations, resource availability and limitations, and risk factors, have to be evaluated. This is done so that she can present a strategy for correcting the project's schedule delay to the project sponsor.
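SV and SPI follow the standard earned value formulas: SV = EV − PV and SPI = EV / PV. The sketch below applies them to hypothetical numbers chosen to loosely echo the 75% productivity rate in the scenario; the monetary figures are invented for illustration.

```python
# Work performance information expressed as schedule variance (SV) and
# schedule performance index (SPI). Values are hypothetical, chosen so
# that SPI comes out at 0.75, echoing the scenario's productivity rate.
planned_value = 40000.0   # PV: budgeted cost of work scheduled to date
earned_value = 30000.0    # EV: budgeted cost of work actually completed

sv = earned_value - planned_value    # negative => behind schedule
spi = earned_value / planned_value   # < 1.0   => behind schedule

print(sv)    # -10000.0
print(spi)   # 0.75
```

An SPI below 1.0 quantifies how far behind the project is: at 0.75, roughly three days of planned work are being completed in every four.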
Based on the team's assessment, Janice can make the decision to add additional resources to the scheduled activities on the critical path. Before crashing the critical path, the team can identify a series of sequenced activities that could be rearranged and completed in parallel to free up more resources that could be re-allocated to the critical path activities. By utilizing these schedule compression techniques, Janice will be able to decrease the delay on her project by increasing the project's productivity rate to 95%. Let us now check your understanding of the topics covered in this lesson. Project time management includes the processes required to manage the timely completion of the project. A project schedule defines the start and end dates of the project and the project activities; these activities are assigned a duration and sequenced in a logical order. Gantt charts and network diagrams are used to identify project activities and determine the relationships and dependencies between them. A Gantt chart displays the start and end dates of project activities, the overall project schedule, and the logical task relationships, while a network diagram is used to plot the activity dependencies. Plan Schedule Management, Define Activities, Sequence Activities, Estimate Activity Resources, Estimate Activity Durations, Develop Schedule, and Control Schedule are the seven processes under Project Time Management. The schedule network analysis technique generates the project schedule based on the estimates of time and resource requirements. With this, we have come to the end of this lesson. In the next lesson, we will cover Project Cost Management.
"CAPM certification from Simplilearn helped me to transform my career from supervisor to team leader in my..." "It was really a great experience for me to learn from Simplilearn. Being in the field of PMO I have gaine..." "Simplilearn CAPM Course is well explained, easily comprehended and well paced. The short intervals of les..."
2019-04-19T15:24:08Z
https://www.simplilearn.com/project-time-management-tutorial
The All Wales Convention chaired by Emyr Jones Parry has invited people in Wales to submit their views on the matter of a referendum for primary lawmaking powers to be devolved to the Assembly. There are two ways people or organizations can do this. One is easy, one is a little more complicated but will probably carry more weight. There is no reason why people can't do both. The easier way is to submit your views in the form of a short letter or email. The more formal way is to submit formal evidence either orally or in writing. This topic is the place to offer suggestions and examples about how to do the first. There is a similar topic about how to submit formal evidence here. • Are the current powers available to the National Assembly enough? • Is it the right time for Wales to take the next step towards full law making powers? Although it is rather tempting to give two one word answers, the reality is that the Convention will be looking for a more informed response. So the trick of making a good submission is to show that you understand the issues and why they are important, and that you have good reasons for the opinion you present. There is no "right" or "wrong" answer. This is a matter of political opinion, and there will be a good number of people writing to the AWC to make precisely the opposite points. Neither is the point of writing to win some sort of argument. The members of the AWC will all have their own opinions anyway! They are trying to assess the views of the general public, and you are writing as a member of the public in order to inform them of your views. In subsequent posts you will see, hopefully, a good number of suggestions and examples from other people. It is important that you do not simply copy their views. If the AWC get 300 letters or emails each saying the same thing, they will quite rightly smell a rat. 
Of course, your views may be substantially or absolutely identical to those of others, but you have to show the AWC that what you are writing is your own opinion. I welcome the establishment of the Convention and believe it will be a valuable exercise in communicating with the people of Wales the current settlement and the implications of a move to primary legislative powers. I would like to express my views on the questions that the All-Wales Convention are putting to the people of Wales. Are the current powers available to the National Assembly sufficient? I believe that the current constitutional arrangement is too constraining to be completely effective. While some good has been achieved with the limited powers now at our disposal, in order to truly benefit Welsh people and promote real devolution, primary legislative powers need to be devolved to the National Assembly. The main disadvantage of the current arrangement is bureaucracy. The current process in place for devolving primary legislative powers, the LCO system, is inefficient and complex, frustrating the Welsh Assembly Government’s ability to deliver for the people of Wales. This has been demonstrated recently by the deadlock over the “right to buy” order. As Housing is a devolved issue, how can it be right that a committee of Westminster MPs can block the implementation of policies that the Welsh Assembly Government was democratically elected to pursue? Is it the right time for Wales to take the next step towards full law making powers? Yes, Wales needs these powers as soon as possible. The key to democracy is accountability. The muddle of responsibility between Wales and London means that the lines of accountability are lost, leaving the electorate confused and frustrated with the system. In order to move forward, we must clarify the role of both Parliament and the National Assembly so that it is easy to understand. 
The public should know who to hold to account, how their democracy is run and how and who to appeal to. A successful referendum would be an important step in clarifying this confusing situation. I believe the referendum should be held before the 2011 Elections to the National Assembly for Wales. I believe that this referendum can and will be won if the process is explained clearly and the changes to the current system are represented honestly. The achievements of the Welsh Assembly Government have been done in spite of the constitutional arrangements, and it is high time that we moved out of the nonsensical half-way-house we now occupy. For people, such as myself, who care about health, education, housing and ensuring a sustainable future, the move to primary powers is of vital importance. How can the National Assembly deliver for the people of Wales with one hand tied behind its back? I want further powers for the Welsh Assembly to be devolved as soon as possible and I hope that the Convention will recommend that a referendum be held before the end of the current term. Again, please remember that this is a suggestion of the sort of letter you might write. Do not quote it verbatim, it will be counterproductive. I can hardly expect others to post their submissions without posting my own. So here is a draft of what I intend to send to the Convention. I would, of course, welcome any comments or suggestions about how to improve it. I welcome the opportunity to offer my views on the two questions the Convention has asked. I think it is widely recognised by the majority of people in Wales that responsibility for the currently devolved areas is a good thing. It has enabled us to adopt policies which are more suited to the democratically expressed wishes of the people of Wales than a "one size fits all" UK wide approach would have allowed. However, in order to do that job the Assembly needs the ability to change or make new laws in these areas of devolved responsibility. 
Although the LCO process is a means by which the Assembly can acquire lawmaking powers, I believe that the process is flawed and, as such, will not prove robust enough to continue to do so when political circumstances change. This is because the LCO system is only capable of transferring competence to make laws with the agreement of all the parties involved, namely the Assembly, both Houses at Westminster and the Secretary of State for Wales. At present the same political party is in power in Westminster and Cardiff. Should future elections result in differing political parties being in power, it seems highly likely that the LCO process would be thwarted. This would be a disaster for democracy. When the 2006 GoW Act was passed, the principle of Westminster's involvement was that it should exercise joint scrutiny with the Assembly. However scrutiny is not the same thing as a veto. The purpose of scrutiny, as exercised by various committees in Westminster, is to improve proposed legislation. These committees make recommendations, but ultimately their recommendations can be overruled. However, what we have clearly seen over the past year is the Welsh Affairs Committee using the process of scrutiny as a means of vetoing a proposed policy. I believe that this was not the intention of the Act, and amounts to Westminster misusing its powers. It was never the intention that Westminster should approve or disapprove the specific details of policy involved in any proposed legislation, but instead consider the more general issue of whether the Assembly should have the competence to legislate (in either one way or another, both immediately and in the future) in that particular area. Another weakness is the sheer bureaucracy of the LCO system. It is a model of over-intricacy that can only be described as Byzantine. If we judge it against the criteria of openness, simplicity and value for money, it fails on all three. It is not a model for good governance.
Is it the right time for Wales to take the next step towards full lawmaking powers? The 20 areas or "subjects" on which the Assembly would have power to legislate are set out in Schedule 7 of the GoW Act, together with fully detailed exceptions. Obviously this list was "hammered out", but nonetheless it is a firm agreement as to what the Assembly can legislate on and, conversely, what it cannot. It seems pointless (a waste of time, effort and money) to go through the LCO process dozens of times when a list of subjects on which the Assembly can legislate has already been agreed. It is understandable that there would have been some concern over the Assembly's ability to scrutinize, and this is what led to the current joint scrutiny arrangement for LCOs. However I believe that the Assembly has risen to the occasion and shown itself entirely capable of doing the job of scrutiny. In fact there are some ways in which the Assembly's form of scrutiny, with greater openness and transparency through scrutiny in plenary session as well as in committee, and especially through commissioning surveys of public opinion, is an improvement on the traditional Westminster model. Although devolution in Wales and Scotland is overwhelmingly thought of as a good thing (only a very small minority in either country would want to reverse it, according to opinion polls) the unequal nature of devolution has proved a source of resentment, not least in England. For any who are concerned with preserving the UK, the need for fair and equitable devolution arrangements is paramount, so any move that makes the Welsh settlement more equal to that in Scotland should be welcomed. In my opinion the crucial issue is timing. The process by which a referendum will be triggered depends on four factors: the Assembly, the Westminster Parliament, the Secretary of State for Wales and, most importantly, the Welsh people. I have little doubt that it would get the required two-thirds majority in the Assembly.
Two parties (Plaid Cymru and the Lib Dems) are unequivocally in favour, and one party (Labour) has agreed to support it as part of the One Wales programme of government. So even if there were to be a few rebels, the two-thirds would easily be achieved. Nor do I have any doubt over the outcome of a referendum when the politicians finally agree to give the people of Wales the chance to vote. Although there have been comparatively few polls, those that have been conducted all show more support for "more powers" for the Assembly than against. In fact some of them even show support for more powers than the Assembly would get after this referendum; for example tax raising powers, or having the same powers as the Scottish Parliament. The polls also show that a very substantial majority believe that the Assembly should have more influence over their lives than Westminster. The crucial factor is Westminster. It is highly unlikely that a Conservative Government would pass such legislation since they have not made any statement indicating they would, even though their Roberts Review gave them the opportunity to make their position clear. Therefore the only realistic hope of getting the referendum through Westminster is while there is a Labour majority (and therefore also a Labour Secretary of State). Of course the timing of a Westminster election is not fixed, but it seems safe to assume that Labour will remain in power until at least May 2010 since, if they call an election before then, it will only be if they think they will win it. Thus the only safe way of getting the referendum through its Westminster stages is to do it before May 2010. If we do not take this opportunity now, we will have lost it for probably another five or ten years. The AWC has indicated that it will report late in 2009, so a "Yes" recommendation given then will allow the referendum to be approved by both Cardiff and Westminster within that six month window. 
As for the date of the referendum itself, there would need to be a proper interval for the "Yes" and "No" campaigns to set themselves up and engage with the public. Therefore a referendum either late in 2010 or early in 2011 would seem entirely appropriate. Finally, I see no reason why the referendum should not be held on the same day as the Assembly elections in 2011. In fact I think there are many good reasons for holding it then, not least because it will tend to maximize turnout and thus be more representative of public opinion. I do not accept that holding different types of vote on the same day is in any way undemocratic or likely to confuse the minds of voters. It is common practice in many countries, and has been done before in the UK. In fact, because of the relatively small amount of coverage Welsh politics receives in the newspapers and broadcast media, holding the two votes on the same day is likely to increase understanding of the issues involved because, in this case, the issues are inextricably linked. Thank you for posting your draft, MH. You've prompted me to have a go. I would like to express my views on the questions that the All-Wales Convention has put to the people of Wales. Not entirely. The widely held belief is that the Assembly has the powers to make legislation in areas of devolved responsibility such as Health, Education and Housing. But in reality the Assembly only has those powers if it can persuade MPs and the Secretary of State for Wales to grant it permission to legislate on a case by case basis. This split responsibility damages the political process because most people believe the AMs they elect are ultimately responsible for the way these key areas are run, when in fact MPs in Westminster can prevent them running these services in the way they were elected to do. Moreover the process by which law-making powers can currently be passed to the Assembly is cumbersome, expensive and has not worked as well as anticipated. 
As evidence of this, about a dozen LCOs were proposed in the first year of this Assembly term but only one managed to make its way through the system. The Wales Office then unilaterally announced that it could not deal with more than about four or five LCOs a year, meaning that it would take a whole Assembly term to pass what was originally thought of as a year's legislation. It is intolerable that the people of Wales should have to wait for morsels to be rationed out in this way. The whole system is like some Heath Robinson contraption to cut the top off a boiled egg: after hours of pedalling, lever pulling and cranking it finally achieves the desired result—one time in ten. Everyone celebrates and pats each other on the back. But using a knife or spoon would have produced the same result far more simply every time—and while the egg was still warm enough to eat! Yes. For the reasons given above, I believe Wales needs to have a simpler and more straightforward means of legislating in areas for which it has devolved responsibility. I would prefer that the Senedd had the same powers and responsibilities as the Scottish Parliament. I believe the people of Wales also support this idea, because polls over the last few years have consistently shown more people want a proper Parliament with tax setting powers than the status quo. However this is not what is on offer under GOWA 2006. This brings me to my major concern. It is unfortunate that the debate on the issue still has people believing that a Yes vote in the referendum will actually give us more than is on offer, for example taxation powers. One of the prime tasks of the AWC must be to make it clear that what we will be asked to vote Yes or No to is not greater areas of devolved responsibility, but simply the ability to pass laws in the areas of responsibility the Assembly already has without having to seek permission on a case by case basis. 
If this were more clearly understood by the general public, the support for a Yes vote would be very much higher than the polls indicate. In essence, a Yes vote in the referendum would give a black and white list of what the Assembly is responsible for (Schedule 7). I believe it both necessary and desirable for us to be able to elect our AMs on the basis of clearly defined responsibilities, so that they can be held to account for their actions rather than be given the excuse of saying that Westminster prevented them from doing what they promised. We must elect AMs to do one job and MPs to do another, but at present we have a recipe for muddle and confusion which can only further alienate people from the democratic process. For this reason I think it essential that the referendum is held either late in 2010 or early in 2011, so as to allow at least two months before the Assembly election in May 2011. This would give the political parties time to formulate manifestos based on what they are actually empowered to deliver. Well, what do you think? Don't be too cruel. First let me say congratulations to those involved in setting up this forum. I particularly like the idea of getting people involved by writing to the All Wales Convention. I didn't realise that it would be so easy to do. Obviously I'll be voting Yes when the referendum is called, but we need to start by making sure that we get the referendum in the first place. It's clear that Labour MPs like Peter Hain are doing their best to scupper the One Wales Agreement to hold the referendum on or before May 2011. They are hoping that the AWC will report that there isn't enough support for more Assembly powers, and they will use that as an excuse to put things off. So it's up to us to not let that happen. They can't ignore a few thousand letters of support. Since its inception, the powers given to the National Assembly have been insufficient to make a real difference for Wales. 
I believe that Wales should have had a Parliament with the same powers as the Scottish Parliament from the outset, and that one of the reasons why support for it was not greater a decade ago was because it could with some justification be described as a glorified talking shop. Many of the flaws and weaknesses of the original Assembly were addressed by the Richard Commission, and some of its suggestions were implemented in the Government of Wales Act 2006. So what we have now is better than what we had, but the essential weakness remains the same: that it is a body with no real power to change anything. It can, as it always has, decide spending priorities, but it still can't make or change any laws. To say that the LCO system is a way of making laws is, in my opinion, just a piece of political rhetoric that has not been borne out in reality. The Assembly can only pass laws if Westminster agrees with them. It is clear that in practice Westminster does not examine the principle of the Assembly gaining law making powers in any area, but instead exercises a veto on the content of the law, seeking to narrow the scope of the LCO so that the Assembly can only do what Westminster approves. If that happens when the same party is in government in both Cardiff Bay and Westminster, it is almost inevitable that it will break down completely when different parties are in power. Perhaps it was designed to do so. But any system that can only work when one particular party is in power is fundamentally undemocratic. Every democracy has different levels of elected government, and the key is to clearly define the respective powers and responsibilities of each. At present Wales has a muddle in which the powers of Westminster and Cardiff Bay are defined in ways that only a constitutional expert could understand. This is bad for democracy. Therefore we should move to a simple list of matters that the National Assembly can legislate on, as contained in Schedule 7 of GOWA 2006. 
This will mean that the electorate will know exactly what the AMs they elect can and cannot do. Definitely. The LCO system is simply not workable in the long term. The progress of LCOs through the system is slow and inefficient. The number of LCOs that can be dealt with at any one time is limited by the workload of MPs, who have said that they cannot cope with more than about five a year. So far they have failed to handle even that many. Opinion polls have shown that more people in Wales are in favour of the Assembly moving to primary law making powers than are against. This number can only increase as the log jam in the LCO system becomes more obvious. For this reason I believe that the referendum should be held sooner rather than later, and definitely before the end of the current Assembly term. I've been thinking about this for the past few weeks, and the problem is that there's so much that could be said. I must have tried about five or six different versions, and they all ended up being far too long. In terms of detail, the constitutional situation of Wales is not easy to understand. But in contrast the principle of devolution is well understood: namely that the National Assembly, rather than the UK Parliament, should be responsible for key areas such as Health, Education, Housing and Transport. In the ten years in which we have had the Assembly, public acceptance of it has grown. Now only 15% would want to return to the situation we had before, and 49% (if we include the 10% who want independence) want the Assembly to advance to the situation where it not only has primary lawmaking powers, but also tax raising powers similar to those of the Scottish Parliament (1). Moreover 61% are of the opinion that the Assembly, rather than Westminster, should have most influence over people in Wales (2). However the current constitutional settlement for Wales is lopsided. 
While it is undoubtedly true that the government formed from those we elect to the Assembly has responsibility for policy in the devolved areas, the Assembly does not have a corresponding ability to make laws in those same areas. This is a fundamental flaw, since the ability to legislate is one of the main tools that any government, anywhere in the world, should have at its disposal. At present our Assembly has responsibility without power. This is very dangerous for democracy. The principle of democracy is that the politicians we elect should be accountable to us, and that there should be clearly defined areas of responsibility so that one set of politicians cannot hide behind or blame another set of politicians for failing to fulfill the commitments on which they were elected. It is intolerable to elect a government which wants to reform, say, Transport or the Health Service if they are then prevented by another set of politicians from legislating to bring about such reform. We therefore need a single defined list of those areas in which the Assembly can legislate. Schedule 7 of the Government of Wales Act 2006 is that list, agreed with Westminster, and now only requiring endorsement by the people of Wales in a referendum. When the GOWA was passed, there was every possibility that the intermediate way of gaining lawmaking powers on a case by case basis through LCOs might work. The Assembly would apply for an LCO, and the Westminster Parliament would grant it if it believed that the lawmaking power sought was commensurate with the Assembly's area of devolved responsibility. In objective terms, the granting of an LCO could have been a comparatively simple process. Indeed, both the Assembly and Westminster thought it would be, as the intention to make about a dozen requests for the first year was announced in June 2007. However the reality has proved to be very different. 
It has become increasingly clear that Westminster has no intention of passing LCOs within the timescale originally envisaged. Westminster has declared it will not process more than four or five a year, has effectively prevented the Assembly from releasing the wording of LCOs without them first being agreed with the Wales Office, and has now (in January) introduced the concept of a veto by the Secretary of State for Wales on proposed legislation, even once the LCO has been granted. This is completely alien to the GOWA. In short, it is clear that the current administration in Westminster wants to use the LCO process as a way of slowing down, if not preventing altogether, the move towards giving the Assembly primary lawmaking powers. As exactly the same Government that passed the GOWA is still in power at Westminster, it is almost impossible to come to any conclusion other than that this was how they intended the LCO process to operate. For that reason, as well as the overriding reason of principle, I think it has now become clear that we must ditch the LCO process and replace it with the single list of areas within which the Assembly can legislate, i.e. Schedule 7. As the saying goes, if something ain't broke, don't fix it. But the corollary is that if something is broke, it needs to be fixed. The LCO process is not working ... in fact, it is almost unworkable. It is bureaucratic, time consuming and expensive. The people of Wales need to be given the opportunity of replacing it with a simpler defined list of areas in which the Assembly can legislate. This needs to happen as soon as possible. I humbly submit to this Convention that it should recommend that the referendum on primary lawmaking powers be held at the earliest opportunity, either before or at the same time as the 2011 Assembly election. I don't know about others, but I don't think I can match some of the other submissions I've read so far. They seem to be very technical. 
So I hope there's also a place for saying something shorter and simpler. No. I think it is unfair that Wales should have an Assembly without the ability to make laws on its own, while Scotland has a full Parliament with the ability to make laws. We elect our Assembly Members to be responsible for running things like the Health Service and Education. They need to have the necessary tools with which to do that job, which include being able to pass laws within areas of devolved responsibility. In my opinion the LCO system is a hopelessly convoluted system by which to gain the power to legislate. The Assembly is perfectly capable of deciding for itself whether legislation is appropriate, without having to ask permission from MPs in Westminster every time. We have elected them to do a job, and if we don't like the decisions they make we can get rid of them at the ballot box. It is to us, the voters of Wales, that our AMs are accountable, not to MPs at Westminster. Westminster should adopt a hands off approach to areas which have been devolved to Wales. Yes. In my opinion the Assembly should have had full lawmaking powers at the outset. Failing that we should have had them as a result of what the Richard Commission recommended. Each time we were let down. We now have a third opportunity to gain full lawmaking powers by means of a referendum. Perhaps the previous disappointments were only to prove the truth of the saying, "Tri chynnig i Gymro" (three tries for a Welshman) ... neu Gymraes (or a Welshwoman)! In my opinion the Welsh public is now more ready than ever to endorse full lawmaking powers in a referendum. The polls certainly confirm a growing margin in favour of this. In June 2007 the margin was 3 percentage points; in February 2008 the margin was 7 percentage points (1). That margin can only continue to widen, especially because we have since then seen the LCO process virtually grind to a halt, and now descend into farce over the Right to Buy issue. 
Westminster also refused to allow us to introduce a law to ban smacking, even though there was a clear majority of AMs in favour of it. If we believe in a democratic society, we must not be afraid of letting the people vote on the issue in a referendum. The parties we have elected to government in Wales have committed themselves to holding that referendum by 2011. That commitment must be honoured. P.S. Thanks for the link, Aderyn. It's very good to see what others think, and I want to thank people for what they've written so far. I don't think it is my job to comment on what people have written, because we will each have our own views and our own way of expressing them. Suffice to say that if I thought anything that had been written was factually incorrect, I would say something. I know from experience that if someone wants to argue against a particular point of view, they will home in on the one or two things that might be wrong and use those as an excuse to discredit any and every good point it might contain. However I would particularly like to pick up on what Mond y fi said in her post. There is nothing wrong with writing a short and simple letter. In fact it may well carry more punch than a longer, more detailed letter. Do what you feel comfortable with. I would also advise people not to worry if they find themselves making the same points as others have already made. There are only so many ways of saying that you want the Assembly to have full law making powers, and that you want it now! But it is important that people write in for themselves, even if they think that others have already said what they would want to say, or said it better than they could say it. Bodies like the All Wales Convention tend to be modeled on Public Inquiries and Committees of Parliament. They are set up on quasi-judicial lines and, like any court of law, can only consider the evidence placed before them. 
If we can't be bothered to put our views to them, they have every right to ignore us. I am pleased that we are having the opportunity to discuss the future options for the governance of Wales, and I thank the All Wales Convention for making this possible. With that in mind, I would like to express my views on the questions that the All-Wales Convention is putting to the people of Wales. The current arrangements have much more to do with papering over the cracks within a political party than with the effective governance of Wales. There is no doubt that from the outset the Assembly should have been provided with adequate powers to act as the conduit for the hopes and aspirations of the people of Wales. However it is proving to be ever more difficult to get the legislation required because of the over-bureaucratic nature of the process. Even with the present limited options much good has been accomplished by the Assembly, but we could have done so much more had the process not been so convoluted. The current process leads to nothing more than an "arm wrestling" contest between the Assembly and vested interests within Westminster and Whitehall. Given that the Westminster Government is Labour run and the Assembly government is a Labour/Plaid coalition, one would expect both institutions to get on reasonably well. However that is not the case, and with the likelihood of the next Westminster administration being a Conservative one I can see the problems growing manyfold. I believe the time is right, in fact the sooner the better. The majority of Welsh people now believe that the Assembly has the most influence on their lives, and by providing it with primary law making powers it will become ever more relevant to their everyday lives. I am tired of being treated within a UK context as some sort of invalid. 
Is it any wonder that we Welsh are short on self confidence when we are continually being told that, unlike Scotland and Northern Ireland, we do not have the ability to have a greater say in running our own affairs? It is particularly galling when the stalling tactics come from some of our own "Welsh" MPs. If we want organ donation to depend on opt out rather than opt in, our Welsh government should be able to implement that change without having to go cap in hand to Westminster, as Health is a devolved matter. Surely that is the whole idea of devolving power, the principle of subsidiarity, which works well in most of the other successful countries of the world. The need to provide the Welsh government with primary lawmaking powers is unfinished business and, as devolution is of course a process and not an event, it is long overdue. Although at present the biggest enemy the pro devolution forces have is voter apathy, I feel we must have a referendum before the arrival of the next Conservative administration and certainly no later than 2011. The current devolution settlement within the UK is out of kilter and as such is inherently unfair to the people of Wales, and this issue must be addressed. I have no doubt that when the Welsh public, through the good work of the All Wales Convention and others, become aware of the overwhelming need to right this wrong they will vote positively in the forthcoming referendum. The only people to gain from delaying the referendum are in truth the enemies of Wales. I sincerely look forward to attending the forthcoming meetings of the All Wales Convention and discussing the advantages of taking the next steps in the devolution process.
http://syniadau.forumotion.net/t62-how-to-submit-views-to-the-all-wales-convention
Disclosed herein are vaso-occlusive devices for occluding the vasculature of a patient. More particularly, disclosed herein are vaso-occlusive devices comprising at least one polymer structure and methods of making and using these devices. Compositions and methods for repair of aneurysms are described. In particular, stretch-resistant vaso-occlusive devices are described, including stretch-resistant vaso-occlusive devices with flexible, articulating detachment junctions. There are a variety of materials and devices which have been used for treatment of aneurysms, including platinum and stainless steel microcoils, polyvinyl alcohol sponges (Ivalone), and other mechanical devices. For example, vaso-occlusion devices are surgical implements or implants that are placed within the vasculature of the human body, typically via a catheter, either to block the flow of blood through a vessel making up that portion of the vasculature through the formation of an embolus or to form such an embolus within an aneurysm stemming from the vessel. One widely used vaso-occlusive device is a helical wire coil having windings that may be dimensioned to engage the walls of the vessels. (See, e.g., U.S. Pat. No. 4,994,069 to Ritchart et al.). Coil devices including polymer coatings or attached polymeric filaments have also been described. See, e.g., U.S. Pat. Nos. 5,226,911; 5,935,145; 6,033,423; 6,280,457; 6,287,318; and 6,299,627. For instance, U.S. Pat. No. 6,280,457 describes wire vaso-occlusive coils having single or multi-filament polymer coatings. U.S. Pat. Nos. 6,287,318 and 5,935,145 describe metallic vaso-occlusive devices having a braided polymeric component attached thereto. U.S. Pat. No. 5,382,259 describes braid structures covering a primary coil structure. In addition, coil designs including stretch-resistant members that run through the lumen of the helical vaso-occlusive coil have also been described. See, e.g., U.S. Pat. Nos. 
5,582,619; 5,833,705; 5,853,418; 6,004,338; 6,013,084; 6,179,857; and 6,193,728. However, none of these documents describe stretch-resistant vaso-occlusive devices as described herein, stretch-resistant vaso-occlusive devices that are flexible with respect to the detachment junction, or methods of making and using such devices. Thus, this invention includes novel occlusive compositions as well as methods of using and making these compositions. In one aspect, the invention includes a vaso-occlusive assembly comprising a core element having a proximal end, a distal end and an outer surface, the proximal end of the core element attached to a detachment junction at the distal end of a pusher wire; and at least one polymer structure surrounding a substantial portion of the surface of the core element, the polymeric structure attached to the distal end of the core element and to the detachment junction. In certain embodiments, the core element comprises a helically wound coil, for example a wire formed into a helically wound primary shape. In certain embodiments, the helically wound primary shape self-forms into a secondary shape (e.g., cloverleaf shaped, helically-shaped, figure-8 shaped, flower-shaped, vortex-shaped, ovoid, randomly shaped, or substantially spherical shape) upon deployment. The core element is preferably electrolytically detachable from the pusher wire. In any of the assemblies described herein, the polymer structure may comprise a tubular braid structure, for example a braid comprising at least one polymer selected from the group consisting of PET, PLGA, and Nylon. Furthermore, in any of the assemblies described herein, at least one component (e.g., the vaso-occlusive device) may be radio-opaque. 
In another aspect, the invention includes any of the assemblies described herein further comprising a three-dimensional structure at the distal end of the detachment junction, and wherein the polymer structure at least partially surrounds the three-dimensional structure and further wherein a flexible joint between the three-dimensional structure and the core element is created by the polymer structure. In certain embodiments, the three-dimensional structure at the distal end of the detachment junction is a ball-like structure. In yet another aspect, the invention includes a method of making an assembly as described herein, the method comprising the steps of (a) securing the polymer structure to the proximal and distal ends of the core element; and (b) attaching the proximal end of the core element to the distal end of a pusher wire, the distal end of the pusher wire comprising an electrolytically detachable junction member. In certain embodiments, step (b) is performed prior to step (a). In other embodiments step (a) and step (b) are performed concurrently. In still further embodiments, step (b) is performed prior to step (a) and further wherein the polymer structure is also secured to the electrolytically detachable junction member. In any of the methods described herein, the core element may comprise a helically wound coil. Furthermore, any of these methods may further comprise the step of forming an end cap at the distal end of the core element (e.g., helically wound coil) from the polymer. In certain embodiments, the polymer structure is secured to the core element and/or junction member using heat. In other embodiments, the polymer structure is secured to the core element and/or junction member using one or more adhesives. In still further embodiments, the polymer structure is secured to the core element and/or junction member using heat and one or more adhesives. 
In yet another aspect, the invention includes a method of at least partially occluding an aneurysm, the method comprising the steps of introducing any of the vaso-occlusive assemblies described herein into the aneurysm and detaching the core element from the detachment junction, thereby deploying the core element into the aneurysm. Furthermore, any of the assemblies or devices described herein may further include one or more additional components. FIG. 1 is a side view depicting an exemplary vaso-occlusive assembly as described herein. FIG. 2 is a side view depicting another exemplary vaso-occlusive assembly as described herein having an external polymer covering. FIG. 3 is a side view depicting another exemplary vaso-occlusive assembly as described herein having a first vaso-occlusive coil surrounded by a second vaso-occlusive coil. FIG. 4 is a side view depicting yet another exemplary vaso-occlusive assembly as described herein having a ball-like structure positioned near the distal end of the detachment zone. FIGS. 5A and 5B are side views depicting another exemplary vaso-occlusive assembly as described herein including a flexible joint created by the polymer structure when it is placed over the core element and over the distal end of the pusher wire. Stretch-resistant occlusive (e.g., embolic) compositions are described. The compositions described herein find use in vascular and neurovascular indications and are particularly useful in treating aneurysms in small-diameter, curved or otherwise difficult to access vasculature, such as cerebral aneurysms. Methods of making and using these vaso-occlusive devices are also aspects of this invention. Unlike previously described stretch resistant vaso-occlusive coils, the devices described herein exhibit enhanced stretch resistance (tensile strength) without the need for stretch resistant members within the lumen of the coil device. 
Instead, stretch resistance is imparted by the use of a polymer structure (e.g., tubular braided structure) covering at least part of the underlying core element (e.g., the coil) and at least part of the detachment junction. Such designs not only exhibit greater stretch resistance than previously described devices, they also exhibit reduced friction and are much simpler to manufacture. Furthermore, unlike currently available stretch-resistant designs, the devices described herein may be designed to include flexible, articulating detachment junctions. As noted above, implantable devices may be conveniently detached from the deployment mechanism (e.g., pusher wire) by the application of electrical energy, which dissolves a suitable substrate at the selected detachment junction. However, many available electrolytically detachable implants are inflexible in or near the detachment junction. As a result of this inflexibility, the force exerted on the pusher wire by the operator can result in catheter kickback during placement or detachment (i.e., the tip of the catheter is displaced out of the aneurysm when the force exerted on the coil via the pusher wire is transmitted back to the catheter) and/or in inefficient detachment of the coil. Thus, the devices and assemblies described herein are stretch resistant and, in addition, result in increased flexibility and articulation of the implantable device with respect to the deployment mechanism (e.g., pusher wire and/or catheter). The detachment junction is preferably electrolytically detachable, but may also be adapted to be mechanically detachable (upon movement or pressure) and/or detached upon the application of heat (thermally detachable), the application of radiation, and/or the application of electromagnetic radiation. 
Advantages of the present invention include, but are not limited to, (i) the provision of stretch-resistant, low-friction vaso-occlusive devices; (ii) the provision of implantable devices that articulate around the detachment junction, thereby reducing catheter kickback effects; (iii) the provision of occlusive devices that can be retrieved and/or repositioned after deployment; and (iv) cost-effective production of these devices. It must be noted that, as used in this specification and the appended claims, the singular forms “a”, “an”, and “the” include plural referents unless the content clearly dictates otherwise. Thus, for example, reference to a device comprising “a polymer” includes devices comprising two or more polymers. The vaso-occlusive devices described herein comprise a core element covered by at least one polymer structure, preferably a braid. The polymer structure may be made up of two or more polymer filaments, for example constructs comprising filamentous elements assembled by one or more operations including coiling, twisting, braiding, weaving or knitting of the filamentous elements. The polymer(s) making up the structures described herein may be selected from a wide variety of materials. One such example is a suture-type material. 
Synthetic and natural polymers, such as polyurethanes (including block copolymers with soft segments containing esters, ethers and carbonates), polyethers, polyamides (including nylon polymers and their derivatives), polyimides (including both thermosetting and thermoplastic materials), acrylates (including cyanoacrylates), epoxy adhesive materials (two part or one part epoxy-amine materials), olefins (including polymers and copolymers of ethylene, propylene, butadiene, styrene, and thermoplastic olefin elastomers), fluorinated polymers (including polytetrafluoroethylene), polydimethyl siloxane-based polymers, cross-linked polymers, non-cross linked polymers, Rayon, cellulose, cellulose derivatives such as nitrocellulose, natural rubbers, polyesters such as lactides, glycolides, trimethylene carbonate, caprolactone polymers and their copolymers, hydroxybutyrate and polyhydroxyvalerate and their copolymers, polyether esters such as polydioxanone, anhydrides such as polymers and copolymers of sebacic acid, hexadecanedioic acid and other diacids, or orthoesters may be used. Thus, the polymer structures described herein may include one or more absorbable (biodegradable) polymers and/or one or more non-absorbable polymers. The terms “absorbable” and “biodegradable” are used interchangeably to refer to any agent that, over time, is no longer identifiable at the site of application in the form it was injected, for example having been removed via degradation, metabolism, dissolving or any passive or active removal procedure. Non-limiting examples of absorbable materials include synthetic and polysaccharide biodegradable hydrogels, collagen, elastin, fibrinogen, fibronectin, vitronectin, laminin and gelatin. Many of these materials are commercially available. Fibrin-containing compositions are commercially available, for example from Baxter. Collagen containing compositions are commercially available, for example from Cohesion Technologies, Inc., Palo Alto, Calif. 
Fibrinogen-containing compositions are described, for example, in U.S. Pat. Nos. 6,168,788 and 5,290,552. Mixtures and copolymers (both block and random) of these materials are also suitable. Preferred biodegradable polymers include materials used as dissolvable suture materials, for instance polyglycolic and/or polylactic acids (PLGA), to encourage cell growth in the aneurysm after their introduction. Preferred non-biodegradable polymers include polyethylene terephthalate (PET or DACRON™), polypropylene, polytetrafluoroethylene, or Nylon materials. Highly preferred are PET or PLGA. The polymeric structure is used to partially or completely cover a core element. The core element may be made of a variety of materials (e.g., metal, polymer, etc.) and may assume a variety of tubular structures, for example braids, coils, combination braid and coils and the like. Thus, although depicted in the Figures described below as a coil, the inner member may be of a variety of shapes or configurations including, but not limited to, braids, knits, woven structures, tubes (e.g., perforated or slotted tubes), cables, injection-molded devices and the like. See, e.g., U.S. Pat. No. 6,533,801 and International Patent Publication WO 02/096273. The core element preferably changes shape upon deployment, for example changing from a constrained linear form to a relaxed, three-dimensional (secondary) configuration. See, also, U.S. Pat. No. 6,280,457. In a particularly preferred embodiment, the core element comprises at least one metal or alloy. Suitable metals and alloys for the core element include the Platinum Group metals, especially platinum, rhodium, palladium, rhenium, as well as tungsten, gold, silver, tantalum, and alloys of these metals. The core element may also comprise any of a wide variety of stainless steels if some sacrifice of radio-opacity may be tolerated. 
Very desirable materials of construction, from a mechanical point of view, are materials that maintain their shape despite being subjected to high stress. Certain “super-elastic alloys” include nickel/titanium alloys (48-58 atomic % nickel and optionally containing modest amounts of iron); copper/zinc alloys (38-42 weight % zinc); copper/zinc alloys containing 1-10 weight % of beryllium, silicon, tin, aluminum, or gallium; and nickel/aluminum alloys (36-38 atomic % aluminum). Particularly preferred are the alloys described in U.S. Pat. Nos. 3,174,851; 3,351,463; and 3,753,700. Especially preferred is the titanium/nickel alloy known as “nitinol.” These are very sturdy alloys that will tolerate significant flexing without deformation, even when used as a very small diameter wire. If a super-elastic alloy such as nitinol is used in any component of the device, the diameter of the wire may be significantly smaller than that used when the relatively more ductile platinum or platinum/tungsten alloy is the material of construction. These metals have significant radio-opacity, and their alloys may be tailored to accomplish an appropriate blend of flexibility and stiffness. They are also largely biologically inert. In a preferred embodiment, the core element comprises a metal wire wound into a primary helical shape. The core element may be, but is not necessarily, subjected to a heating step to set the wire into the primary shape. The diameter of the wire making up the coils is typically in the range of 0.0005 to 0.050 inches, preferably between about 0.001 and about 0.004 inches. Methods of making vaso-occlusive coils having a linear helical shape and/or a different three-dimensional (secondary) configuration are known in the art and described in detail in the documents cited above, for example in U.S. Pat. No. 6,280,457.
Thus, it is further within the scope of this invention that the vaso-occlusive device as a whole, or elements thereof, comprise secondary shapes or structures that differ from the linear coil shapes depicted in the Figures, for example spheres, ellipses, spirals, ovoids, figure-8 shapes, etc. The devices described herein may be self-forming in that they assume the secondary configuration upon deployment into an aneurysm. Alternatively, the devices may assume their secondary configurations under certain conditions (e.g., change in temperature, application of energy, etc.). The polymeric structures are secured, at least, to the proximal end of the core element. Furthermore, as shown in the Figures, the polymer structure is also in contact with, and preferably secured to, the electrolytically detachable junction at the distal end of the pusher wire. The polymeric structure is also optionally secured near or at the distal end of the core element, for example so as to create an end cap on the distal end of the core element that may ease deployment. Alternatively, the optional end cap may be formed from a different polymer(s) than used to cover the core element. The polymeric structure(s) may be combined with the core element in any fashion. For example, the polymeric structures may be wound around the core element or, alternatively, may be shaped into a tubular sheath that surrounds the core element. The polymer component may adhere to the core element in one or more locations, for example by heating (melting) of the polymer, by use of adhesives (e.g., EVA) applied to the polymer or to the core element, by heat setting so as to shrink the polymer(s) onto the core element, or by other suitable means. The polymer component may completely cover the core element (as shown in FIG. 1B) or may be added to the core element such that one or more regions of the core element are not covered.
It will be apparent that the process used to attach the polymer to the core element will depend on the nature of the polymer. For example, it will be preferable not to heat certain polymers (e.g., PGLA), as heating may cause degradation of PGLA. Furthermore, the polymeric component may be added to the core element before or after the core element is shaped into a primary and/or secondary configuration. The polymeric component may be added before or after the core element is attached to a detachable junction. Typically, the core element is attached to a detachment junction at its proximal end. See also the Examples. Methods of connecting a core element to a pusher wire having an electrolytically detachable junction are well known and described, for example, in U.S. Pat. Nos. 6,620,152; 6,425,893; 5,976,131; 5,354,295; and 5,122,136. It will be apparent that the detachment junction may also include additional polymers to which the core element and polymer coverings are secured. For example, when the core element is secured to the detachment junction prior to addition of the polymeric structure, the distal end of the detachment junction may comprise a polymer such as PET. The use of the polymer structures attached to known vaso-occlusive devices (core elements) as described herein results in much less friction upon delivery and/or deployment and, in addition, increases the stretch-resistance (tensile strength) of the devices. Depicted in the Figures are exemplary embodiments of the present invention in which the core element is depicted as a helically wound metallic coil. It will be appreciated that the drawings are for purposes of illustration only and that other implantable devices can be used in place of embolic coils, for example stents, filters, and the like.
Furthermore, although depicted in the Figures as embolic coils, the embolic devices may be of a variety of shapes or configurations including, but not limited to, open and/or closed pitch helically wound coils, braids, wires, knits, woven structures, tubes (e.g., perforated or slotted tubes), injection-molded devices and the like. See, e.g., U.S. Pat. No. 6,533,801 and International Patent Publication WO 02/096273. It will also be appreciated that the devices and assemblies can have various configurations as long as they are stretch resistant and/or exhibit the required flexibility. FIG. 1 is a schematic depicting an exemplary stretch-resistant device as described herein. The device comprises a helically wound core element 10 covered by a tubular polymeric braid structure 20. Also shown in FIG. 1 are detachment junction 30 and pusher wire 40. The tubular polymer braid 20 is secured to the proximal and distal ends of the core element 10 and to the distal region of the detachment junction 30. As shown in FIG. 1, the device optionally includes end caps 50, 55, to reduce the potential of the core element to cause trauma to the target vessel. Optional end caps 50, 55 are depicted in FIG. 1 as formed from polymer braid 20. Alternatively, optional end caps 50, 55 may be formed from different polymers or from the core element. One or both of the end caps may be present. FIG. 2 shows an embodiment in which the tubular braid 20 is secured to the distal end of the detachment region 30 using an electrically insulated coil 45 structure. It will be apparent that the polymeric braid can be secured near the distal end of the detachment region by any suitable means, for example by melting or gluing. Furthermore, the detachment region 30 may further include an additional polymer on its distal end. FIG. 3 shows an exemplary embodiment similar to FIG.
2 but further comprising a second helically wound vaso-occlusive device (coil) 65 surrounding the helically wound core element 10 (and tubular braid 20). As shown in FIG. 3, it is preferred that the second coil 65 is shorter than, or of equivalent length to, the core element 10. In certain instances, the second coil 65 may be longer than the core element 10, so long as it does not restrict flexibility of the core element 10 with respect to the detachment zone 30. Second coil 65 may be wholly or partially electrically insulated or wholly or partially electrically conductive. Like the embodiment in FIG. 2, the embodiment shown in FIG. 3 has the tubular braid 20 secured near the distal end of the detachment region 30 using an electrically insulated coil 45 structure. Furthermore, although depicted in FIG. 3 as separate components, it will be apparent that the second, outermost helically wound vaso-occlusive device 65 and the electrically insulated coil 45 structure securing the tubular braid 20 to the detachment zone 30 can be a single component, formed, for example, by helically winding an electrically insulated wire in the configuration shown in FIG. 3. When second device 65 and securing coil 45 are a single component, one or more regions of the second device 65 may have electrical insulation removed therefrom. In any of the exemplary devices described herein, the polymeric braid may be loaded onto the core element before or after the core element is secured near the distal end of the detachment zone. As noted above, stretch-resistant vaso-occlusive devices as described herein are conveniently detached from the deployment mechanism (e.g., pusher wire) by the application of electrical energy, which dissolves a suitable substrate at the selected detachment junction. The present invention also relates to flexible detachment junctions, which result in reduced catheter kickback and more efficient deployment.
In particular, flexibility at the detachment zone may be imparted by attaching the polymer to the detachment junction in such a way that the stretch-resistant device is free to pivot with respect to the pusher wire. FIG. 4 depicts an exemplary stretch-resistant device 15 in which pusher wire 40 comprises a ball-like structure 25 at its distal end. The ball-like structure 25 is covered by an electrically insulated material 27. The stretch-resistant device 15 includes a core element 10 and polymer covering 20. The polymer covering 20 covers the core element 10 and the distal portion of the detachment junction 30 of the pusher wire 40, including the ball-like structure 25 and insulating material 27. As a result of covering the distal end of the detachment junction 30 with the polymer structure 20 used to cover the core element 10, a flexible, articulating joint is created between the core element 10 and detachment junction 30. FIG. 5A is a schematic showing an embodiment similar to the one depicted in FIG. 4 in a linear configuration. FIG. 5B shows how the flexible joint allows the core element 10 to pivot with respect to the pusher wire 40. Although illustrated in the Figures as a ball-like structure, it will be apparent that flexibility may be imparted by the inclusion of virtually any three-dimensional structure, or in some cases simply by using the distal end of the pusher wire 40, so long as a flexible joint is created by the polymer coating 20. Non-limiting examples of suitable three-dimensional structures include ball-like structures, other spherical shapes, ovoid shapes, cubes, etc. It will also be apparent that one or more additional polymers may be included at one or more regions of the assembly, for example at the distal end of the detachment junction 30.
Furthermore, as noted above, the polymer may be combined with the core element before, concurrently with, or after the core element is combined with the pusher wire having a detachment junction at its distal end. In other words, the core element may be combined with the pusher wire using standard techniques to form a GDC detachment junction and, subsequently, a polymer structure may be applied to the core element-pusher wire assembly. Alternatively, the core element may be first combined with a polymer structure, which is subsequently combined with the pusher wire to form a GDC junction. As yet another option, the core element and pusher wire may be combined using the polymer structure to form the GDC junction. The polymer structure may be combined with the core element and detachment junction using any of the methods described above, including, but not limited to, melting, adhesives and/or heat shrinking. One or more of the components of the devices described herein (e.g., polymer covering, core element) may also comprise additional components (described in further detail below), such as co-solvents, plasticizers, radio-opaque materials (e.g., metals such as tantalum, gold or platinum), coalescing solvents, bioactive agents, antimicrobial agents, antithrombogenic agents, antibiotics, pigments, radiopacifiers and/or ion conductors, which may be coated using any suitable method or may be incorporated into the element(s) during production. In addition, lubricious (e.g., hydrophilic) materials may be used to coat one or more members of the device to help facilitate delivery. Cyanoacrylate resins (particularly n-butylcyanoacrylate) and particulate embolization materials such as microparticles of polyvinyl alcohol foam may also be introduced into the intended site after the inventive devices are in place. Furthermore, previously described fibrous braided and woven components (U.S. Pat. No.
5,522,822) may also be included, for example surrounding the polymeric structure-covered core elements described herein. One or more bioactive materials may also be included. See, e.g., co-owned U.S. Pat. No. 6,585,754 and WO 02/051460. The term “bioactive” refers to any agent that exhibits effects in vivo, for example a thrombotic agent, an anti-thrombotic agent (e.g., a water-soluble agent that inhibits thrombosis for a limited time period, described above), a therapeutic agent (e.g., a chemotherapeutic agent) or the like. Non-limiting examples of bioactive materials include cytokines; extracellular matrix molecules (e.g., collagen); trace metals (e.g., copper); and other molecules that stabilize thrombus formation or inhibit clot lysis (e.g., proteins or functional fragments of proteins, including but not limited to Factor XIII, α2-antiplasmin, plasminogen activator inhibitor-1 (PAI-1) or the like). Non-limiting examples of cytokines which may be used alone or in combination in the practice of the present invention include basic fibroblast growth factor (bFGF), platelet-derived growth factor (PDGF), vascular endothelial growth factor (VEGF), transforming growth factor beta (TGF-β) and the like. Cytokines, extracellular matrix molecules and thrombus-stabilizing molecules (e.g., Factor XIII, PAI-1, etc.) are commercially available from several vendors such as, for example, Genzyme (Framingham, Mass.), Genentech (South San Francisco, Calif.), Amgen (Thousand Oaks, Calif.), R&D Systems and Immunex (Seattle, Wash.). Additionally, bioactive polypeptides can be synthesized recombinantly, as the sequences of many of these molecules are also available, for example from the GenBank database. Thus, it is intended that the invention include use of DNA or RNA encoding any of the bioactive molecules. Cells (e.g., fibroblasts, stem cells, etc.) can also be included. Such cells may be genetically modified.
Furthermore, it is intended, although not always explicitly stated, that molecules having similar biological activity as wild-type or purified cytokines, extracellular matrix molecules and thrombus-stabilizing proteins (e.g., recombinantly produced versions or mutants thereof), and nucleic acids encoding these molecules, are intended to be used within the spirit and scope of the invention. Further, the amount and concentration of liquid embolic and/or other bioactive materials useful in the practice of the invention can be readily determined by a skilled operator, and it will be understood that any combination of materials, concentration or dosage can be used, so long as it is not harmful to the subject. A selected site is reached through the vascular system using a collection of specifically chosen catheters and/or guide wires. Should the site be remote, e.g., in the brain, methods of reaching it are somewhat limited. One widely accepted procedure is found in U.S. Pat. No. 4,994,069 to Ritchart, et al. It utilizes a fine endovascular catheter such as is found in U.S. Pat. No. 4,739,768, to Engelson. First, a large catheter is introduced through an entry site in the vasculature. Typically, this would be through a femoral artery in the groin. Other entry sites sometimes chosen are found in the neck and are in general well known by physicians who practice this type of medicine. Once the introducer is in place, a guiding catheter is then used to provide a safe passageway from the entry site to a region near the site to be treated. For instance, in treating a site in the human brain, a guiding catheter would be chosen which would extend from the entry site at the femoral artery, up through the large arteries extending to the heart, around the heart through the aortic arch, and downstream through one of the arteries extending from the upper side of the aorta.
A guidewire and neurovascular catheter such as that described in the Engelson patent are then placed through the guiding catheter. Once the distal end of the catheter is positioned at the site, often by locating its distal end through the use of radiopaque marker material and fluoroscopy, the catheter is cleared. For instance, if a guidewire has been used to position the catheter, it is withdrawn from the catheter, and then the assembly, for example including the absorbable vaso-occlusive device at the distal end, is advanced through the catheter. Once the selected site has been reached, the vaso-occlusive device is extruded, for example by loading onto a pusher wire. Preferably, the vaso-occlusive device is loaded onto the pusher wire via an electrolytically cleavable junction (e.g., a GDC-type junction that can be severed by application of heat, electrolysis, electrodynamic activation or other means). Additionally, the vaso-occlusive device can be designed to include multiple detachment points, as described in co-owned U.S. Pat. Nos. 6,623,493 and 6,533,801 and International Patent Publication WO 02/45596. The devices are held in place by gravity, shape, size, volume, magnetic field or combinations thereof. To test the stretch-resistant properties of the devices described herein, the following experiments were performed. Two-inch-long core platinum linear coils (0.00175″ wire diameter and 0.006″ inner diameter of coil) were covered with either a PET braid (12 end, 80° braiding angle, 0.013″ (Secant)) or a PLGA braid (16 end, high braiding angle, 0.013″ (Secant)). For the PET-covered coil, the PET tubular braid was loaded over the coil and melted to the distal and proximal ends. A short PET plug was inserted into the Pt coil to increase bonding strength. The PET at the proximal end of the Pt coil was then joined to an electrolytically detachable junction on the distal end of a pusher wire using standard GDC processing techniques.
For the PGLA-covered coil, a platinum coil was joined to an electrolytically detachable junction on the distal end of a pusher wire using standard GDC processing techniques, using a PET junction. Subsequently, the PGLA tubular braid was slid over the Pt coil and glued to the distal end of the coil with Dymax 1128 UV-curable adhesive. The proximal end of the PGLA tubular braid was also glued to the PET junction (at the proximal end of the coil) using the same adhesive. Tensile strength of the PET- and PGLA-covered coils was compared to currently available stretch-resistant coil designs, available as GDC™-10 Soft SR coils from Boston Scientific. Currently available stretch-resistant designs include sutures through the lumen of a helically wound coil. Tensile testing was conducted using equipment available from Instron®. In particular, tensile tests between the distal portion of the coil (0.5 inches from the tip) and the pusher wire were conducted at 2 inches/minute. Results of 5 separate experiments are shown in Table 1. Thus, the devices described herein exhibit approximately 3- to 3.5-fold increased stretch resistance as compared to currently available stretch-resistant designs (inner suture or thread designs). In particular, the PET braid melt design improved stretch resistance by approximately 3.5-fold over current designs, while the PGLA design (glued) improved stretch resistance by approximately 3-fold over current designs. Modifications of the procedure and vaso-occlusive devices described above, and the methods of using them in keeping with this invention, will be apparent to those having skill in this mechanical and surgical art. These variations are intended to be within the scope of the claims that follow.

wherein the core element is electrolytically detachable from a pusher wire.
2. The vaso-occlusive assembly of claim 1, wherein the core element comprises a helically wound coil.
3. The vaso-occlusive assembly of claim 1, wherein the polymer structure comprises a tubular braid structure.
4. The vaso-occlusive assembly of claim 3, wherein the braid comprises at least one polymer selected from the group consisting of PET, PLGA, and Nylon.
5. The vaso-occlusive assembly of claim 1, wherein the core element comprises a wire formed into a helically wound primary shape.
6. The vaso-occlusive assembly of claim 5, wherein the core element has a secondary shape that self-forms upon deployment.
7. The vaso-occlusive assembly of claim 6, wherein the secondary shape is selected from the group consisting of cloverleaf-shaped, helically-shaped, figure-8 shaped, flower-shaped, vortex-shaped, ovoid, randomly shaped, and substantially spherical.
8. The vaso-occlusive assembly of claim 1, wherein the device is radiopaque.
9. The vaso-occlusive assembly of claim 1, wherein the core element is electrolytically detachable from a pusher wire.
10. The vaso-occlusive assembly of claim 1, further comprising a three-dimensional structure at the distal end of the detachment junction, wherein the polymer structure at least partially surrounds the three-dimensional structure and further wherein a flexible joint between the three-dimensional structure and the core element is created by the polymer structure.
11. The vaso-occlusive assembly of claim 10, wherein the three-dimensional structure at the distal end of the detachment junction is a ball-like structure.
wherein the distal end of the pusher wire comprises an electrolytically detachable junction member.
13. The method of claim 12, wherein step (a) is performed prior to step (b).
14. The method of claim 12, wherein step (a) and step (b) are performed concurrently.
15. The method of claim 12, wherein step (b) is performed prior to step (a) and further wherein the core element is also secured to the electrolytically detachable junction member.
16. The method of claim 12, wherein the core element comprises a helically wound coil.
17. The method of claim 16, further comprising the step of forming an end cap at the distal end of the helically wound coil from the polymer.
18. The method of claim 12, wherein the polymer structure is secured to the core element and/or junction member using heat.
19. The method of claim 12, wherein the polymer structure is secured to the core element and/or junction member using one or more adhesives.
20. The method of claim 12, wherein the polymer structure is secured to the core element and/or junction member using heat and one or more adhesives.
21. A method of at least partially occluding an aneurysm, the method comprising the steps of introducing a vaso-occlusive assembly according to claim 1 into the aneurysm and detaching the polymeric structure from the detachment junction, thereby deploying the core element into the aneurysm.
https://patents.google.com/patent/US8002789B2/en
Caché SQL automatically uses a Query Optimizer to create a query plan that provides optimal query performance in most circumstances. This Optimizer improves query performance in many ways, including determining which indices to use, determining the order of evaluation of multiple AND conditions, determining the sequence of tables when performing multiple joins, and many other optimization operations. You can supply “hints” to this Optimizer in the FROM clause of the query. This chapter describes tools that you can use to evaluate a query plan and to modify how Caché SQL will optimize a specific query:

- SQL Runtime Statistics: generate performance statistics on query execution.
- Index Analyzer: display various index analyzer reports for all queries in the current namespace. This shows how Caché SQL is going to execute each query, giving you an overall view of how indices are being used. This index analysis may indicate that you should add one or more indices to improve performance.
- Show Plan: display the optimal (default) execution plan for an SQL query.
- Alternate Show Plans: display available alternate execution plans for an SQL query, with statistics.
- Index Optimization Options: FROM clause options governing all conditions, or %NOINDEX prefacing an individual condition.
- Parallel Query Processing: the %PARALLEL keyword FROM clause option allows multi-processor systems to divide query execution amongst the processors.
- Cached Queries: enable Dynamic SQL queries to be rerun without the overhead of preparing the query each time it is executed.
- SQL Statements: preserve the most-recently compiled Embedded SQL query. Described in the “SQL Statements and Frozen Plans” chapter.
- Frozen Plans: preserve a specific compile of an Embedded SQL query; this compile is used rather than a more recent compile. Described in the “SQL Statements and Frozen Plans” chapter.
- Defining Indices: indices can significantly speed access to data in specific indexed fields.
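As a sketch of the FROM-clause hint syntax, the %PARALLEL keyword might be supplied through Dynamic SQL as follows. The Sample.Person table and its Name and Age fields are hypothetical examples, not taken from this chapter:

```objectscript
 // Sketch only: Sample.Person is an assumed example table.
 // %PARALLEL in the FROM clause hints that the optimizer may divide
 // execution of this query among processors.
 SET stmt = ##class(%SQL.Statement).%New()
 SET sc = stmt.%Prepare("SELECT Name, Age FROM %PARALLEL Sample.Person ORDER BY Name")
 IF $SYSTEM.Status.IsOK(sc) {
     SET rs = stmt.%Execute()
     WHILE rs.%Next() { WRITE rs.%Get("Name"), ! }
 }
```

The hint is advisory; the Optimizer still decides whether parallel execution is worthwhile for the actual plan.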
This chapter also describes how to write Query Optimization Plans to a file, and how to generate an SQL Troubleshooting Report to submit to InterSystems WRC:

- Generate Report: submit an SQL performance report to InterSystems WRC (Worldwide Response Center customer support). To use this reporting tool you must first get a WRC tracking number from the WRC.
- Import Report: for InterSystems use only.

You can use SQL Runtime Statistics to measure the performance of query execution on your system. SQL Runtime Statistics measures the performance of SELECT, INSERT, UPDATE, and DELETE operations (collectively known as query operations). This feature is off by default. After activating it, you must recompile SQL queries. You can use the Caché Management Portal or the %SYS.PTools.SQLStats class to collect performance statistics on an SQL query. Using this class you can determine, for each SQL query: the compile time, the number of global references, the number of lines of code executed, the number of times a module is called, the total execution time, the time to first row, the disk wait (the disk read access time, in milliseconds), and the number of rows processed. You can set the statistics collection level using either:

- the Management Portal SQL Runtime Statistics tab (from the Management Portal, select System Explorer, then Tools, then SQL Performance Tools, then SQL Runtime Statistics); or
- the SetSQLStats() or SetSQLStatsJob() method.

For either of these interfaces, you specify one of the following options:

- 0: turn off statistics code generation.
- 1: turn on statistics code generation for all queries, but do not gather statistics (the default).
- 2: record statistics for just the outer loop of the query (gather statistics at the open and close of the query).
- 3: record statistics for all module levels of the query. Modules can be nested; if so, the MAIN module statistics are inclusive numbers, the overall results for the full query.

For SetSQLStatsJob() the options differ slightly.
They include:

- -1: turn off statistics for this job.
- 0: use the system setting value (the default).

The 1, 2, and 3 options are the same as for SetSQLStats() and override the system setting. When changing levels, note the following:

- To go from 0 to 1: after changing the SQL Stats option, routines and classes that contain SQL must be compiled to perform statistics code generation. For xDBC and Dynamic SQL, you must purge cached queries to force code regeneration.
- To go from 1 to 2: simply change the SQL Stats option to begin gathering statistics. This allows you to enable SQL performance analysis on a running production environment with minimal disruption.
- To go from 1 to 3 (or 2 to 3): after changing the SQL Stats option, routines and classes that contain SQL must be compiled to record statistics for all module levels. For xDBC and Dynamic SQL, you must purge cached queries to force code regeneration. Option 3 is commonly used only on an identified poorly-performing query in a non-production environment.
- To go from 1, 2, or 3 to 0: to turn off statistics code generation you do not need to purge cached queries.

This information is stored in %SYS.PTools.SQLQuery and %SYS.PTools.SQLStats. Purging a cached query purges any related SQL Stats data, as does dropping a table or view. From the Management Portal select System Explorer, then Tools, then SQL Performance Tools, then SQL Runtime Statistics, and click the View Stats tab. This gives you an overall view of the runtime statistics that have been gathered on this system. You can click on a View Stats column to sort the query statistics. You can then click Show Plan for a specific query. You can export query performance statistics to a text file. By default, columns in this text file are delimited by tabs.
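The level changes described above might be scripted as in the following sketch. It assumes that SetSQLStats() returns the previous setting and that $SYSTEM.SQL.Purge() is the call used to purge cached queries; verify both against your Caché version:

```objectscript
 // Turn on statistics gathering for the outer query loop (option 2).
 // Assumption: SetSQLStats() returns the prior level.
 SET oldLevel = $SYSTEM.SQL.SetSQLStats(2)

 // For xDBC and Dynamic SQL, purge cached queries so the statistics
 // code is regenerated on the next prepare:
 DO $SYSTEM.SQL.Purge()

 // ... run the workload to be measured, then restore the old level:
 DO $SYSTEM.SQL.SetSQLStats(oldLevel)
```

Going back to 0 at the end needs no cached-query purge, as noted above.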
If you don't specify a filename argument, these methods create a .psql file in the Mgr directory, using your system ID, Caché installation directory, and Caché version to generate a file name. If you specify a filename argument, these methods create a file in the Mgr subdirectory for the current namespace, or in the path location you specify. This export is limited to data in the current namespace. Two export methods are available:

- The Export() method of %SYS.PTools.SQLStats: exports statistics data from the %SYS.PTools.SQLStats class to a delimited text file.
- The ExportAll() method of %SYS.PTools.SQLStats: exports from both the %SYS.PTools.SQLQuery and %SYS.PTools.SQLStats classes to a delimited text file. It exports the SQL statement text, the statistics data, and, optionally, the SQL Show Plan.

The SQL Runtime Statistics tool can be used to display the Show Plan for a query with runtime statistics. The Alternate Show Plans tool can be used to compare show plans with statistics, displaying runtime statistics for a query. The Alternate Show Plans tool, in its Show Plan Options, displays estimated statistics for a query. If gathering runtime statistics is activated, its Compare Show Plans with Stats option displays actual runtime statistics; if runtime statistics are not active, this option displays estimated statistics. Indexing provides a mechanism for optimizing queries by maintaining a sorted subset of commonly requested data. Determining which fields should be indexed requires some thought: with too few or the wrong indices, key queries will run too slowly; too many indices can slow down INSERT and UPDATE performance (as the index values must be set or updated). To determine if adding an index improves query performance, run the query from the Management Portal SQL interface and note in Performance the number of global references. Add the index and then rerun the query, noting the number of global references. A useful index should reduce the number of global references.
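The add-an-index test just described might be scripted as follows. The table, field, and index names are hypothetical, and $SYSTEM.SQL.Execute() is assumed as the call for issuing ad hoc statements:

```objectscript
 // Hypothetical table/field/index names. Run the query, note the
 // global references in the Portal's Performance display, then add
 // the index and rerun; a useful index reduces that count.
 SET rs = $SYSTEM.SQL.Execute("SELECT Name FROM Hospital.Patients WHERE AdmitDate = CURRENT_DATE")

 // Add a candidate index via DDL:
 DO $SYSTEM.SQL.Execute("CREATE INDEX AdmitDateIdx ON TABLE Hospital.Patients (AdmitDate)")

 // Rerun the same query and compare global references:
 SET rs = $SYSTEM.SQL.Execute("SELECT Name FROM Hospital.Patients WHERE AdmitDate = CURRENT_DATE")
```

The comparison itself is read from the Management Portal's Performance display, as described above, rather than from the result set.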
You can prevent use of an index by using the %NOINDEX keyword as a preface to a WHERE clause or ON clause condition. An INNER JOIN should have indices on both ON clause fields. You should index fields that are specified in a WHERE clause equality condition. You may also wish to index fields that are specified in a WHERE clause range condition, and fields specified in GROUP BY and ORDER BY clauses. Under certain circumstances, an index based on a range condition could make a query slower. This can occur if the vast majority of the rows meet the specified range condition. For example, if the query clause WHERE Date < CURRENT_DATE is used with a database in which most of the records are from prior dates, indexing on Date may actually slow down the query. This is because the Query Optimizer assumes range conditions will return a relatively small number of rows, and optimizes for this situation. You can determine if this is occurring by prefacing the range condition with %NOINDEX and then running the query again. If you are performing a comparison using an indexed field, the field as specified in the comparison should have the same collation type as it has in the corresponding index. For example, the Name field in the WHERE clause of a SELECT, or in the ON clause of a JOIN, should have the same collation as the index defined for the Name field. If there is a mismatch between the field collation and the index collation, the index may be less effective or may not be used at all. For further details, refer to Index Collation in the “Defining and Building Indices” chapter of this manual. For details on how to create an index and the available index types and options, refer to the CREATE INDEX command in the Caché SQL Reference, and the “Defining and Building Indices” chapter of this manual. The following configuration methods also affect index use:

- $SYSTEM.SQL.SetDDLPKeyNotIDKey(): use the PRIMARY KEY as the IDKey index.
- $SYSTEM.SQL.SetFastDistinct(): use indices for SELECT DISTINCT queries.
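The %NOINDEX range test and the configuration methods just mentioned might be exercised as in this sketch. The table and field names are hypothetical, and the boolean argument to each Set method is an assumption (a flag enabling the behavior):

```objectscript
 // Range condition with the index suppressed, to compare plans when
 // most rows match the range (hypothetical table/field names):
 SET rs = $SYSTEM.SQL.Execute("SELECT %ID FROM Hospital.Patients WHERE %NOINDEX AdmitDate < CURRENT_DATE")

 // System-wide index-related settings (argument assumed to be a
 // 1/0 enable flag; check the class reference for the signature):
 DO $SYSTEM.SQL.SetDDLPKeyNotIDKey(1)
 DO $SYSTEM.SQL.SetFastDistinct(1)
```

If the %NOINDEX version performs better, the range-condition index is hurting this query and can be suppressed or dropped.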
For further details, refer to the SQL configuration settings described in the Caché Advanced Configuration Settings Reference.

Two facilities are available for index analysis: the Management Portal Index Analyzer SQL performance tool, and the %SYS.PTools.SQLUtilities methods IndexUsage(), TableScans(), TempIndices(), and JoinIndices().

From the Management Portal, select System Explorer, then Tools, then SQL Performance Tools, then Index Analyzer. The Index Analyzer provides an SQL Statement Count display for the current namespace, and four index analysis report options.

At the top of the SQL Index Analyzer there is an option to count all SQL statements in the namespace. Press the Gather SQL Statements button. The SQL Index Analyzer displays “Gathering SQL statements ....” while the count is in progress, then “Done” when the count is complete. SQL statements are counted in three categories: a Cached Query count, a Class Method count, and a Class Query count. These counts are for the entire current namespace, and are not affected by the Include System Queries? option or the Schema Selection option.

However, note that running an SQL Index Analyzer report option with a Schema Selection generates 1 cached query. Running the Index usage option generates an additional 3 cached queries (a total of 4 if a Schema Selection is specified). These generated cached queries are counted in subsequent uses of Gather SQL Statements. Repeated use of the different report option choices with different schema selections does not generate additional cached queries. The corresponding method is GetSQLStatements() in the %SYS.PTools.SQLUtilities class.

Index usage: This option takes all of the cached queries in the current namespace, generates a Show Plan for each, and keeps a count of how many times each index is used by each query and the total usage of each index by all queries in the namespace. This can reveal indices that are not being used, so that they can either be removed or modified to make them more useful.
The result set is ordered from least used index to most used index. The corresponding method is IndexUsage() in the %SYS.PTools.SQLUtilities class.

Queries with table scans: This option identifies all queries in the current namespace that perform table scans. Table scans should be avoided if possible. A table scan can’t always be avoided, but if a table has a large number of table scans, the indices defined for that table should be reviewed. Often the list of table scans and the list of temp indices overlap; fixing one removes the other. The result set lists the tables from largest Block Count to smallest Block Count. A Show Plan link is provided to display the Statement Text and Query Plan. The corresponding method is TableScans() in the %SYS.PTools.SQLUtilities class.

Queries with temp indices: This option identifies all queries in the current namespace that build temporary indices to resolve the SQL. Sometimes the use of a temp index is helpful and improves performance; for example, building a small index based on a range condition that Caché can then use to read the master map in order. Sometimes a temp index is simply a subset of a different index and might be very efficient. Other times a temporary index degrades performance; for example, scanning the master map to build a temporary index on a property that has a condition. This situation indicates that a needed index is missing; you should add an index to the class that matches the temporary index. The result set lists the tables from largest Block Count to smallest Block Count. A Show Plan link is provided to display the Statement Text and Query Plan. The corresponding method is TempIndices() in the %SYS.PTools.SQLUtilities class.

Queries with missing JOIN indices: This option examines all queries in the current namespace that have joins, and determines whether there is an index defined to support each join.
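The report options can also be driven programmatically. A sketch (the IndexName column name is an assumption to verify against the class reference; the %SYS_PTools.SQLUtilResults table and its UsageCount column are the ones this manual's own example orders by):

```
 // Reinitialize the Index Analyzer results tables, then analyze index usage.
 DO ##class(%SYS.PTools.SQLUtilities).GetSQLStatements()
 DO ##class(%SYS.PTools.SQLUtilities).IndexUsage()

 // List indices from least used to most used.
 SET rs = ##class(%SQL.Statement).%ExecDirect(,
   "SELECT IndexName,UsageCount FROM %SYS_PTools.SQLUtilResults ORDER BY UsageCount")
 WHILE rs.%Next() { WRITE rs.%GetData(1),"  used ",rs.%GetData(2)," times",! }
```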
It ranks the indices available to support the joins from 0 (no index present) to 4 (index fully supports the join). Outer joins require an index in one direction; inner joins require an index in both directions. The result set contains only rows that have a JoinIndexFlag < 4. JoinIndexFlag=4 means there is an index that fully supports the join; these are not listed. The corresponding method is JoinIndices() in the %SYS.PTools.SQLUtilities class, which provides descriptions of the JoinIndexFlag values.

When you select one of these options, the system automatically performs the operation and displays the results. The first time you select an option or invoke the corresponding method, the system generates the results data; if you select that option or invoke that method again, Caché redisplays the same results. To generate new results data, you must use the Gather SQL Statements button to reinitialize the Index Analyzer results tables. To generate new results data for the %SYS.PTools.SQLUtilities methods, you must invoke GetSQLStatements() to reinitialize the Index Analyzer results tables. Changing the Include System Queries? check box option also reinitializes the Index Analyzer results tables.

   "FROM %SYS_PTools.SQLUtilities GROUP BY Type"
   "FROM %SYS_PTools.SQLUtilResults ORDER BY UsageCount"
   WRITE !,"End of utilities data",!!
   WRITE !,"End of results data"

Note that because results are ordered by UsageCount, indices with UsageCount > 0 are listed at the end of the result set.

By default, the Caché SQL query optimizer uses sophisticated and flexible algorithms to optimize the performance of complex queries involving multiple indices. In most cases, these defaults provide optimal performance. However, in infrequent cases, you may wish to give “hints” to the query optimizer by specifying optimize-option keywords. The FROM clause supports the %ALLINDEX and %IGNOREINDEX optimize-option keywords. These optimize-option keywords govern all index use in the query.
They are described in detail on the FROM clause reference page of the Caché SQL Reference.

You can use the %NOINDEX condition-level hint to specify exceptions to the use of an index for a specific condition. The %NOINDEX hint is placed in front of each condition for which no index should be used; for example, WHERE %NOINDEX hiredate < ?. This is most commonly used when the overwhelming majority of the data is selected (or not selected) by the condition. With a less-than (<) or greater-than (>) condition, use of the %NOINDEX condition-level hint is often beneficial. With an equality condition, it provides no benefit. With a join condition, %NOINDEX is not supported for =* and *= WHERE clause outer joins; it is supported for ON clause joins.

Show Plan displays the execution plan for SELECT, UPDATE, DELETE, TRUNCATE TABLE, and some INSERT operations. These are collectively known as query operations, because they use a SELECT query as part of their execution. Show Plan is performed when a query operation is prepared; you do not have to actually execute the query operation to generate an execution plan. Show Plan displays what Caché considers to be the optimal execution plan. However, for most queries there is more than one possible execution plan, and you can also display alternate show plans.

From the Management Portal SQL interface: Select System Explorer, then SQL. Select a namespace with the Switch option at the top of the page. (You can set the Management Portal default namespace for each user.) Write a query (either in the text box, or by using Query Builder), then press the Show Plan button. (You can also invoke Show Plan from the Show History listing by clicking the plan option for a listed query.) See Executing SQL Statements in the “Using the Management Portal SQL Interface” chapter of this manual.

From the Query Test tab: Select a namespace with the Switch option at the top of the page. Write a query in the text box.
Then press the Show Plan with SQL Stats button. This generates a Show Plan without executing the query.

From the View Stats tab: Press the Show Plan button for one of the listed queries. The listed queries include both those written at Execute Query and those written at Query Test.

   SET mysql(1)="SELECT TOP 10 Name,DOB FROM Sample.Person "
   SET mysql(2)="WHERE Name [ 'A' ORDER BY Age"

   SET cqsql(1)="SELECT TOP :i%PropTopNum Name,DOB FROM Sample.Person "
   SET cqsql(2)="WHERE Name [ :i%PropPersonName ORDER BY Age"

Show Plan by default returns values in Logical mode. However, when invoked from the Management Portal or the SQL Shell, Show Plan uses Runtime mode.

Statement Text replicates the original query, with the following modifications. The Show Plan button from the Management Portal SQL interface displays the SELECT query prefaced with DECLARE QRS CURSOR FOR (QRS is Query Result Set); this is done to allow Show Plan to use a frozen plan. The Show Plan button display also performs literal substitution, replacing each literal with a ?, unless you have suppressed literal substitution by enclosing the literal value in double parentheses. These modifications are not done when displaying a show plan using the ShowPlan() method, or when displaying alternate show plans.

“Frozen Plan” is the first line of Query Plan if the query plan has been frozen; otherwise, the first line is blank. “Relative cost” is an integer value, computed from many factors, that serves as an abstract number for comparing the efficiency of two queries. This calculation takes into account (among other factors) the complexity of the query, the presence of indices, and the size of the table(s). “Relative cost not available” is returned by certain aggregate queries, such as COUNT(*) or MAX(%ID) without a WHERE clause.

The Query Plan consists of a main module and (when needed) one or more subcomponents.
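A query can be passed to the ShowPlan() method as a subscripted array of statement lines. A sketch, assuming ShowPlan() is invoked through $SYSTEM.SQL (verify the exact entry point and signature in the %SYSTEM.SQL class reference):

```
 // Build the query as a subscripted array, then display its execution plan.
 SET mysql(1)="SELECT TOP 10 Name,DOB FROM Sample.Person "
 SET mysql(2)="WHERE Name [ 'A' ORDER BY Age"
 DO $SYSTEM.SQL.ShowPlan(.mysql)
```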
One or more module subcomponents may be shown, named alphabetically starting with B (Module B, Module C, etc.) and listed in the order of execution (not necessarily alphabetically). When the end of the alphabet is reached, additional modules are numbered, counting Z as 26, so the next module after Module Z is Module 27. A module performs processing and populates an internal temp-file (internal temporary table) with its results.

One or more subquery subcomponents may be shown; each subquery is shown as a separate subquery module in the order specified in the query. Subquery modules are not named. If a subquery calls a module, the module is placed after the subquery and given an appropriate non-sequential alphabetical name. Therefore, a query plan could contain a main module that calls Module B and a subquery that calls Module H.

Non-query INSERT: An INSERT ... VALUES() command does not perform a query, and therefore does not generate a Query Plan.

Query always FALSE: In a few cases, Caché can determine when preparing a query that a query condition will always be false, and thus cannot return data. Show Plan informs you of this situation in the Query Plan component. For example, a query containing the condition WHERE %ID IS NULL or the condition WHERE Name %STARTSWITH('A') AND Name IS NULL cannot return data, and therefore Caché generates no execution plan. Rather than generating an execution plan, the Query Plan says “Output no rows”. If a query contains a subquery with one of these conditions, the subquery module of the Query Plan says “Subquery result NULL, found no rows”. This condition check is limited to a few situations involving NULL, and is not intended to catch all self-contradictory query conditions.

Invalid query: Show Plan displays an SQLCODE error message for most invalid queries. However, in a few cases, Show Plan displays as empty. For example, WHERE Name = $$$$$ or WHERE Name %STARTSWITH('A") (note the mismatched single and double quotation marks).
In these cases, Show Plan displays no Statement Text, and the Query Plan says [No plan created for this statement]. This commonly occurs when the quotation marks delimiting a literal are imbalanced. It also occurs when you specify two or more leading dollar signs without specifying the correct syntax for a user-defined (“extrinsic”) function.

You can display alternate execution plans for a query using the Management Portal or the ShowPlanAlt() method.

From the Management Portal System Explorer, select Tools, then SQL Performance Tools, then Alternate Show Plans. Using this tool, you input a query, then press the Show Plan Options button to display multiple alternate show plans. Select the plans that you wish to compare, then press the Compare Show Plans with Stats button to run them and display their SQL statistics.

The ShowPlanAlt() method shows all of the execution plans for a query. It first shows the plan that Caché considers optimal (lowest cost), the same Show Plan display as the ShowPlan() method. ShowPlanAlt() then allows you to select an alternate plan to display. Alternate plans are listed in ascending order of cost. Specify the ID number of an alternate plan at the prompt to display its execution plan. ShowPlanAlt() then prompts you for the ID of another alternate plan. To exit this utility, press the Return key at the prompt.

   SET mysql(1)="SELECT TOP 4 Name,DOB FROM Sample.Person ORDER BY Age"

To display an alternate plan, specify the plan’s ID number from the displayed list and press Return. To exit ShowPlanAlt(), just press Return. Also refer to the PossiblePlans methods in the %SYS.PTools.SQLUtilities class.

The Show Plan Options list assigns each alternate show plan a Cost value, which enables you to make relative comparisons between the execution plans. The Alternate Show Plan details provide, for each Query Plan, a set of stats (statistics) for the Query Totals and (where applicable) for each Query Plan module.
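A sketch of invoking ShowPlanAlt() from the Terminal, assuming it is called through $SYSTEM.SQL in the same way as ShowPlan() (verify the entry point in the %SYSTEM.SQL class reference):

```
 // Display the optimal plan, then list alternate plans by ascending cost.
 SET mysql(1)="SELECT TOP 4 Name,DOB FROM Sample.Person ORDER BY Age"
 DO $SYSTEM.SQL.ShowPlanAlt(.mysql)
 // At the prompt, enter an alternate plan's ID number to display it,
 // or press Return to exit the utility.
```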
The stats for each module include Time (overall performance, in seconds), Globals (number of global references), Commands (number of commands executed), and Disk Wait (disk read latency, in milliseconds). The Query Totals stats also include Rows (the number of rows returned).

The following utility lists the query optimization plan(s) for one or more queries to a text file.

infile: A file pathname to a text file containing a listing of cached queries. Specified as a quoted string.

outfile: A file pathname where query optimization plans are to be listed. Specified as a quoted string. If the file does not exist, the system creates it. If the file already exists, Caché overwrites it.

eos: Optional. The end-of-statement delimiter used to separate the individual cached queries in the infile listing. Specified as a quoted string. The default is “GO”. If this eos string does not match the cached query separator, no outfile is generated.

schemapath: Optional. A comma-separated list of schema names that specifies a schema search path for unqualified table names, view names, or stored procedure names. Can include DEFAULT_SCHEMA, the current system-wide default schema. If infile contains #Import directives, QOPlanner adds these #Import package/schema names to the end of schemapath.

The following is an example of invoking this query optimization plans listing utility. This utility takes as input the file generated by the ExportSQL^%qarDDLExport() utility, as described in the “Listing Cached Queries to a File” section of the “Cached Queries” chapter. You can either generate this query listing file, or write a query (or queries) to a text file. You can use the query optimization plan text files to compare generated optimization plans using different variants of a query, or to compare optimization plans between different versions of Caché. A #Import statement tells the QOPlanner utility what default package/schema to use for the plan generation of the query.
When exporting the SQL queries from a routine, any #Import lines in the routine code prior to the SQL statement will also precede the SQL text in the export file. Queries exported to the text file from cached queries are assumed to contain fully qualified table references; if a table reference in a text file is not fully qualified, the QOPlanner utility uses the system-wide default schema that is defined on the system when QOPlanner is run.

The optional %PARALLEL keyword is specified in the FROM clause of a query. It suggests that Caché perform parallel processing of the query, using multiple processors (if applicable). This can significantly improve the performance of some queries that use one or more COUNT, SUM, AVG, MAX, or MIN aggregate functions and/or a GROUP BY clause, as well as many other types of queries. These are commonly queries that process a large quantity of data and return a small result set. For example, SELECT AVG(SaleAmt) FROM %PARALLEL User.AllSales GROUP BY Region would likely use parallel processing.

A “one row” query that specifies only aggregate functions, expressions, and subqueries performs parallel processing, with or without a GROUP BY clause. However, a “multi-row” query that specifies both individual fields and one or more aggregate functions does not perform parallel processing unless it includes a GROUP BY clause. For example, SELECT Name,AVG(Age) FROM %PARALLEL Sample.Person does not perform parallel processing, but SELECT Name,AVG(Age) FROM %PARALLEL Sample.Person GROUP BY Home_State does.

If a query that specifies %PARALLEL is compiled in Runtime mode, all constants are interpreted as being in ODBC format. For further details, refer to the FROM clause in the Caché SQL Reference.

Regardless of the presence of the %PARALLEL keyword in the FROM clause, some queries may use linear processing rather than parallel processing.
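The aggregate-plus-GROUP BY pattern can be tried directly against the Sample schema; a sketch using dynamic SQL (the query is taken from the manual's own example):

```
 // This query qualifies for parallel processing: aggregate + GROUP BY.
 SET sql = "SELECT Home_State,AVG(Age) FROM %PARALLEL Sample.Person GROUP BY Home_State"
 SET rs = ##class(%SQL.Statement).%ExecDirect(,sql)
 WHILE rs.%Next() { WRITE rs.%GetData(1),": ",rs.%GetData(2),! }
 // Without the GROUP BY clause, the same multi-row SELECT would run linearly.
```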
Caché makes the decision whether or not to use parallel processing for a query after optimizing that query and applying other query optimization options (if specified). Caché may determine that the optimized form of the query is not suitable for parallel processing, even if the user-specified form of the query would appear to benefit from it. You can determine if and how Caché has partitioned a query for parallel processing using Show Plan.

Parallel processing is not applied in the following cases:

The query contains the FOR SOME predicate.

The query contains both a TOP clause and an ORDER BY clause. This combination of clauses optimizes for fastest time-to-first-row, which does not use parallel processing. Adding the FROM clause %NOTOPOPT optimize-option keyword optimizes for fastest retrieval of the complete result set; if the query does not contain an aggregate function, this combination of %PARALLEL and %NOTOPOPT performs parallel processing of the query.

The query contains a LEFT OUTER JOIN or INNER JOIN in which the ON clause is not an equality condition; for example, FROM %PARALLEL Sample.Person p LEFT OUTER JOIN Sample.Employee e ON p.dob > e.dob. This occurs because SQL optimization transforms this type of join into a FULL OUTER JOIN, and %PARALLEL is ignored for a FULL OUTER JOIN.

The %PARALLEL and %INORDER optimizations cannot be used together; if both are specified, %PARALLEL is ignored.

COUNT(*) does not use parallel processing if the table has a BITMAPEXTENT index.

%PARALLEL is intended for tables using standard data storage definitions; its use with customized storage formats may not be supported.

%PARALLEL is not supported for GLOBAL TEMPORARY tables or tables with extended global reference storage.

%PARALLEL is intended for a query that can access all rows of a table; a table defined with row-level security (ROWLEVELSECURITY) cannot perform parallel processing.

%PARALLEL is intended for use with data stored in the local database. It does not support global nodes mapped to a remote database.
%PARALLEL is ignored when applied to a subquery that includes a complex predicate, or a predicate that optimizes to a complex predicate. Predicates that are considered complex include the %CONTAINS, %CONTAINSTERM, FOR SOME, and FOR SOME %ELEMENT predicates.

For parallel processing, Caché supports multiple InterProcess Queues (IPQ). Each IPQ handles a single parallel query. An IPQ allows parallel work unit subprocesses to send rows of data back to the main process, so the main process does not have to wait for a work unit to complete. This enables parallel queries to return their first row of data as quickly as possible, without waiting for the entire query to complete. It also improves the performance of aggregate functions.

Note that this formula is not 100% accurate, because a parallel query can spawn subqueries which are themselves parallel. It is therefore prudent to allocate more gmheap than the formula specifies. Failing to allocate adequate gmheap results in errors reported to cconsole.log; SQL queries may fail, and other errors may occur as other subsystems try to allocate gmheap.

To review gmheap usage by an instance, including IPQ usage in particular, from the home page of the Management Portal choose System Operation, then System Usage, and click the Shared Memory Heap Usage link; see Generic (Shared) Memory Heap Usage in the “Monitoring Caché Using the Management Portal” chapter of the Caché Monitoring Guide for more information.

To change the size of the generic memory heap, or gmheap (sometimes known as the shared memory heap or SMH), from the home page of the Management Portal choose System Administration, then Configuration, then Additional Settings, then Advanced Memory; see Advanced Memory Settings in the “Caché Additional Configuration Settings” chapter of the Caché Additional Configuration Settings Reference for more information.
If you are running a cached SQL query that uses %PARALLEL, and while this query is being initialized you do something that purges cached queries, the query could receive a <NOROUTINE> error reported from one of the worker jobs. Typical operations that cause cached queries to be purged are calling $SYSTEM.SQL.Purge() or recompiling a class that the query references; recompiling a class automatically purges any cached queries relating to that class. If this error occurs, running the query again will probably succeed. Removing %PARALLEL from the query avoids any chance of this error.

An SQL query that uses %PARALLEL can result in multiple SQL Statements. The Plan State for these SQL Statements is Unfrozen/Parallel. A query with a plan state of Unfrozen/Parallel cannot be frozen by user action. Refer to the “SQL Statements” chapter for further details.
Here are 100 orthopedic and spine physician leaders to know. The physicians on this list were selected for their leadership positions within professional societies, their management positions at surgery centers and hospitals, and their important contributions to the field of orthopedic and spine surgery. This list is not an endorsement of the physicians selected for inclusion.

Todd J. Albert, MD, of Rothman Institute in Philadelphia. Dr. Albert, president of Rothman Institute, is a clear leader in his field. He also serves as the James Edwards professor and chair of the orthopedic surgery department at Jefferson Medical College of Thomas Jefferson University and Thomas Jefferson University Hospital in Philadelphia. Read more about Dr. Todd J. Albert.

Gary Alegre, MD, of Alpine Orthopaedic Group in Stockton, Calif. After performing over 200 minimally invasive spinal fusion procedures in the past few years, Dr. Alegre instructs experienced physicians around the world on minimally invasive spinal fusion techniques. Most recently, two physicians from Australia traveled to view and learn from Dr. Alegre. Read more about Dr. Gary Alegre.

David Altchek, MD, of the Hospital for Special Surgery in New York City. Dr. Altchek, attending orthopedic surgeon and co-chief of the sports medicine and shoulder service at the Hospital for Special Surgery, has pioneered sports medicine while treating top professional athletes from around the country. He started his career early, at age 10, by following his orthopedic surgeon father during Saturday morning rounds, according to a story published in the Times Herald-Record. Read more about Dr. David Altchek.

Neel Anand, MD, of Cedars-Sinai Medical Center in Los Angeles. Dr. Anand is the director of orthopedic spine surgery at the Cedars-Sinai Institute for Spinal Disorders at Cedars-Sinai Medical Center. He was previously the director of minimally invasive spinal surgery and director of spine trauma at Cedars-Sinai. Dr.
Anand has been a pioneer of minimally invasive spine procedures that treat spinal curvature in adults. He was among the first physicians to perform a combination of procedures to correct adult lumbar degenerative scoliosis. Read more about Dr. Neel Anand.

James Andrews, MD, of the Andrews Sports Medicine & Orthopaedic Center in Birmingham, Ala. Dr. Andrews is the founder of Andrews Sports Medicine & Orthopaedic Center in Birmingham, Ala. He is also an orthopedic consultant for the Washington Redskins professional football team and medical director for the Tampa Bay Rays professional baseball team. Read more about Dr. James Andrews.

Michelle Andrews, MD, of Cincinnati Sportsmedicine and Orthopaedic Center. Dr. Andrews is an orthopedic surgeon with the Cincinnati Sportsmedicine and Orthopaedic Center and has expertise in sports medicine. She has served on the Board of Trustees for the Women's Sports Foundation and the Women's Basketball Coaches Foundation. She was also invited to the White House by former President Bill Clinton to be honored for her work with the Women's Sports Foundation. Read more about Dr. Michelle Andrews.

Vincent Arlet, MD, of the University of Virginia School of Medicine in Charlottesville. Dr. Arlet has a special interest in treating patients with scoliosis. He recently treated a patient using a new database he created, Scolisoft, to show clinical pictures of different types of spine curvatures. Scolisoft is the largest international online spinal deformity database and the only database carrying clinical photographs of surgical patients taken before and after surgery. Read more about Dr. Vincent Arlet.

H. Brent Bamberger, DO, of Orthopedic Associates of SW Ohio. Dr. Bamberger was recently named president of the American Osteopathic Academy of Orthopedics. He has published several articles on elbow reconstruction and holds patents on surgical instruments.
He has a professional interest in progressive surgical techniques and is a founding member of Athletic Workshop, a group promoting better training for athletes. Read more about Dr. H. Brent Bamberger.

Edward Benzel, MD, of the Cleveland Clinic Spine Institute. As chairman of the Cleveland Clinic Spine Institute and vice president of the department of neurosurgery, Dr. Benzel seems happy to share his wisdom on back problems. In an interview with WebMD on the prevalence of back pain, he said patients should consult a staggered line of defense when confronted with back problems: first, the primary care physician, then an orthopedic surgeon or a neurosurgeon. Read more about Dr. Edward Benzel.

Stacey Berner, MD, of Northwestern Hospital in Randallstown, Md. Dr. Berner used the Da Vinci Surgical System to repair the nerves on a 20-year-old man who put his hand through glass while on vacation. By the time he arrived for surgery, one of the man's nerves had been severed, and he had large amounts of scar tissue. Read more about Dr. Stacey Berner.

Daniel J. Berry, MD, of the Mayo Clinic in Rochester, Minn. He has earned awards for his clinical research from the Hip Society, the Knee Society, AAHKS and the Orthopedic Research and Education Foundation. His primary area of research is biomechanics and motion analysis. Most recently, his articles have focused on hip and knee arthroplasty, including failed metal-on-metal hip arthroplasties. Read more about Dr. Daniel J. Berry.

Thomas Best, MD, of OSU Sports Medicine in Columbus, Ohio. Dr. Best is the co-director of OSU Sports Medicine, where his clinical interests include muscle and tendon injuries, osteoarthritis and concussions. He is the current president of the American College of Sports Medicine, which is partnering with several other organizations to form Professionals Against Doping in Sports, a group encouraging commitment among athletes and physicians to drug-free sports. Read more about Dr. Thomas Best.
John Bergfeld, MD, of the Cleveland Clinic. Dr. Bergfeld is recognized nationwide as a leader in orthopedic surgery and sports medicine. Dr. Bergfeld served as team physician for the Cleveland Browns football team from 1976-2003 and the Cleveland Cavaliers from 1982-2003 and currently serves as a consultant surgeon for both teams. Read more about Dr. John Bergfeld.

Sigurd H. Berven, MD, of the University of California in San Francisco. The field of pediatric scoliosis and spinal disorders has been rapidly expanding with the development of new body braces, the Scoliscore scoliosis detection test and legislation in several states calling for schoolchildren to receive scoliosis screening. As an expert in the field, Dr. Berven recently shared his expertise on the subject with the New York Times. Read more coverage on Dr. Sigurd H. Berven.

Todd C. Bonvallet, MD, of The Spine Center in Chattanooga, Tenn. Dr. Bonvallet will review and adjust the quality, availability, safety and appropriateness of the program's medical services. With a growing national emphasis on quality measures and standards of care, Dr. Bonvallet will be responsible for conducting chart and case reviews to ensure compliance with national standards. Read more about Dr. Todd C. Bonvallet.

Stephen Burkhart, MD, of The San Antonio Orthopaedic Group. Dr. Burkhart is a leader in shoulder surgery and recently gave the Kessel Lecture at the International Congress on Shoulder and Elbow Surgeons in Edinburgh, Scotland. The lecture, "Expanding the Frontiers of Shoulder Surgery," examined the role of technology for treating shoulder conditions. Read more about Dr. Stephen Burkhart.

Charles Bush-Joseph, MD, of Midwest Orthopaedics at Rush. Dr. Bush-Joseph is a respected educator and scholar and sits on the editorial board of several national orthopedic journals, including the American Journal of Sports Medicine.
He has been a member of the American Board of Orthopaedic Surgery Sports Medicine Examination Committee since 2005 and has helped to formulate the board exam for orthopedic surgeons and sports medicine physicians. Read more about Dr. Charles Bush-Joseph.

John J. Callaghan, MD, of the University of Iowa Hospitals & Clinics in Iowa City. Dr. Callaghan, current president of the American Academy of Orthopaedic Surgeons, has his work cut out for him. Under his leadership in 2010, the AAOS plans to enhance all aspects of patient care, initiate new working relationships with other orthopedic organizations and track success rates for procedures and devices to help improve the standard of care. Read more about Dr. John J. Callaghan.

Christofer Catterson, MD, of Haywood Sports Medicine in Clyde, N.C. As the team physician for Tuscola High School, Franklin High School and Haywood Christian Academy, Dr. Catterson is a proven leader in youth sports medicine. Recently, Haywood Sports Medicine offered a free pre-participation comprehensive sports medical physical for middle and high school athletes in order to best advise the athletes on how to prevent injuries during their upcoming seasons. Read more about Dr. Christofer Catterson.

Michael G. Ciccotti, MD, of Rothman Institute in Philadelphia. Dr. Ciccotti is the director of the division of sports medicine and co-director of sports medicine research at Rothman Institute. He is also the head team physician for the Philadelphia Phillies and an orthopedic consultant to the U.S. Women's National Soccer Team, Philadelphia Flyers and Philadelphia Eagles. Read more about Dr. Michael G. Ciccotti.

William G. Clancy, MD, of Andrews Sports Medicine and Orthopaedic Center in Birmingham, Ala. Dr. Clancy has earned recognition among professional athletes for the ACL and PCL reconstruction procedures he developed while practicing medicine at the University of Wisconsin. Most NFL, NBA and NHL players with ACL tears have had the "Clancy Procedure."
Read more about Dr. William Clancy. Tyson Cobb, MD, of the Hand Center of Excellence at Orthopedic Specialists in Davenport, Iowa. Dr. Cobb developed a surgical treatment technique that uses a one-inch incision. Dr. Cobb lectures nationally and internationally on minimally invasive surgical techniques. He has special expertise in treatment for arthritis, joint replacement, metabolic bone disease, tumors and sports medicine. Read more about Dr. Tyson Cobb. Brian Cole, MD, of Midwest Orthopaedics at Rush in Chicago. Dr. Cole is a professor in the department of orthopedics with a conjoint appointment in the department of anatomy and cell biology at Rush University Medical Center in Chicago. A specialist in arthroscopic shoulder, elbow and knee surgery, he also serves as the team physician for the Chicago Bulls and co-team physician for the Chicago White Sox. Read more about Dr. Brian Cole. Gordon Cromwell, Jr., MD, of Harrison Medical Center in Bremerton, Wash. Orthopedic and trauma surgeon Dr. Cromwell was recently named chief of staff at Harrison Medical Center. The position may be new, but the location isn't: Dr. Cromwell has been with Harrison for 33 years. During his tenure, he has served as an orthopedic surgeon, chief of Harrison's orthopedic section, chief medical officer and, most recently, assistant chief of staff. Read more about Dr. Gordon Cromwell. Leigh Ann Curl, MD, of MedStar SportsHealth. Dr. Curl has earned many honors during her career in sports medicine and broke through several barriers in a field dominated by men. She is the first female head team orthopedic surgeon for a professional football team, the Baltimore Ravens, and graduated at the top of her class from Johns Hopkins School of Medicine in Baltimore. Read more about Dr. Leigh Ann Curl. Phani K. Dantuluri, MD, of Resurgens Orthopaedics in Georgia. Dr. Dantuluri is a sports medicine surgeon with a special interest in hand and wrist surgery. 
He is the chief of the division of shoulder surgery at The Philadelphia Hand Center and Resurgens Orthopaedics in Georgia. He regularly diagnoses and treats patients with shoulder, elbow and hand conditions and injuries. Read more about Dr. Phani K. Dantuluri. Tal S. David, MD, of Arthroscopy & Orthopedic Sports Medicine Associates in San Diego. Referred to as "soccer savior" by one of his patients, Dr. David has proven his leadership among orthopedic surgeons by performing ACL reconstructions using the AperFix system. In a story published by the Del Mar Times, Dr. David spoke about the advantages of using the new minimally invasive technology during knee procedures. Read more about Dr. Tal S. David. Robert T. Deveny, MD, of Danbury Hospital in Danbury, Conn. Dr. Deveny has a professional interest in disorders of the hip and knee, minimally invasive total hip and knee replacement and hip resurfacing and was recently featured in a Shoreline Plus article on the topic. In addition to seeing his patients, Dr. Deveny conducts research on topics including hip extension osteotomy for flatback syndrome and inferior hip dislocations. Read more about Dr. Robert T. Deveny. John Dietz, MD, of Indiana Orthopaedic Hospital in Indianapolis. Dr. Dietz is an integral part of his hospital's efforts to cut costs and achieve superior results in the march towards healthcare reform. In a July interview with Becker's Orthopedic & Spine Review, he discussed cost-reduction strategies that save hospitals money without sacrificing quality care. Read more about Dr. John Dietz. Christopher T. Donaldson, MD, of Western Pennsylvania Orthopedic & Sports Medicine in Johnstown. After committing years to researching and practicing shoulder arthroscopy, Dr. Donaldson recently shared his expertise as an associate instructor to the Arthroscopy Association of North America shoulder arthroscopy master's course at the Orthopaedic Learning Center in Chicago. Read more about Dr. Christopher T. Donaldson. 
Lawrence D. Dorr, MD, of the Dorr Arthritis Institute at Good Samaritan Hospital in Los Angeles. In addition to providing more than 10,000 hip and knee replacements during his career, Dr. Dorr founded Operation Walk, a volunteer medical mission that provides orthopedic surgery for underprivileged patients around the world. Read more about Dr. Lawrence D. Dorr. Egon Doppenberg, MD, of NorthShore University HealthSystem in Evanston, Ill. Dr. Doppenberg specializes in the treatment of brain and spine tumors and complex degenerative and traumatic spinal disorders at NorthShore University HealthSystem. An expert on minimally invasive neurosurgical procedures, he also serves as clinical assistant professor of neurosurgery at the University of Chicago Pritzker School of Medicine. Read more about Dr. Egon Doppenberg. Randall Dryer, MD, of the Central Texas Spine Institute in Austin. Dr. Dryer is a leading spine surgeon who regularly performs outpatient spine surgery at Northwest Hills Surgical Hospital, a Surgical Care Affiliates facility. He stands at the cutting edge of research in his field; he was recently part of a group that studied multi-level cervical disc replacement in an ASC setting, a study he presented at this year's North American Spine Society annual meeting. Read more about Dr. Randall Dryer. Matthew El-Kadi, MD, of the University of Pittsburgh Passavant. As chief of neurosurgery at the University of Pittsburgh Passavant spine center, Dr. El-Kadi has been instrumental in growing the facility and expanding its services beyond traditional spine surgery. Dr. El-Kadi has a professional interest in treating patients with fusion and minimally invasive spine surgery techniques. Read more about Dr. Matthew El-Kadi. Thomas Einhorn, MD, of Boston University School of Medicine. Dr. 
Einhorn has recently been in the news for successfully using biologic techniques to heal a patient's broken leg after several other surgeries failed. This procedure is but one of many successful biologic procedures attributed to Dr. Einhorn during his career. Read more about Dr. Thomas Einhorn. Richard G. Fessler, MD, of Northwestern University in Chicago. Dr. Fessler has pioneered several spine surgery techniques, including microendoscopic discectomy and microendoscopic decompression of lumbar stenosis. He has also earned a place as a leader in spine surgery by being the first physician in the United States to perform human embryonic spinal cord transplantation. Read more about Dr. Richard G. Fessler. Mark Flood, DO, of Celling Treatment Centers in Austin, Texas. In the field of adult stem cell research and biologic treatment of spinal disorders, Dr. Flood has proven himself a leader by constantly remaining at the cutting edge of this technology. He first used his knowledge of adult stem cell procedures in Arizona, completing the state's first scoliosis surgery using the technology on a 17-year-old patient. Read more about Dr. Mark Flood. Daniel Garza, MD, of Stanford Hospital & Clinics in California. In order to develop better treatment of football injuries, Dr. Garza is collaborating with the San Francisco 49ers for a study about the biomechanics of football injuries, according to a Stanford Hospital & Clinics news release. Read more about Dr. Daniel Garza. David Geier, MD, of the Medical University of South Carolina in Charleston. As the director of the Medical University of South Carolina's sports medicine program and spokesman for the American Orthopaedic Society for Sports Medicine, Dr. Geier has invested a great deal of time and energy into youth sports medicine and injury prevention. Read more about Dr. David Geier. Thomas Graham, MD, of the Curtis National Hand Center at Union Memorial Hospital in Baltimore, Md. Dr. 
Graham estimates he has helped around 1,700 professional athletes. He has been sought out by NBA star Shaquille O'Neal, golfer Anthony Kim and Boston Bruins center David Krejci. Read more about Dr. Thomas Graham. Andrew Gregory, MD, of Vanderbilt Orthopaedics in Nashville. As a child in Huntsville, Ala., Dr. Gregory enjoyed playing soccer and tennis as well as running track. Today, as an orthopedic surgeon and member of the ACSM Youth Sports and Health Committee, Dr. Gregory has a professional interest in treating young athletes. Dr. Gregory serves as an assistant professor of orthopedics and pediatrics as well as director of the primary care sports medicine fellowship at Vanderbilt Orthopaedics in Nashville. Read more about Dr. Andrew Gregory. Kevin Gill, MD, of the UT Southwestern Spine Center in Dallas. In addition to his work as professor and vice chairman of orthopedic surgery at the University of Texas Southwestern Medical Center at Dallas and co-director of the UT Southwestern Spine Center, Dr. Gill has found the time to make significant contributions to spine surgery procedures through his research focused on degenerative disorders. Read more about Dr. Kevin Gill. Scott Gillogly, MD, of Atlanta Medical Center. As the head team physician and orthopedic surgeon for the Atlanta Thrashers NHL hockey team and the Atlanta Falcons NFL football team, Dr. Gillogly knows how to treat a sports injury — and how one feels. He first became interested in sports medicine and orthopedic surgery as a quarterback for the United States Army at West Point. Read more about Dr. Scott Gillogly. Manish Gupta, MD, of Sports & Orthopedic Center in Boca Raton, Fla. It seems fitting that the homepage of Dr. Gupta's Sports & Orthopedic Center is headed by a link to the surgeon's fantasy football picks. As a board-certified sports medicine and reconstruction specialist, Dr. 
Gupta has worked with the Baltimore Ravens and the Morgan State University football program, helping mend sports injuries. Read more about Dr. Manish Gupta. Emanuel Haber, MD, of The Foot and Ankle Centre of New Jersey in Paramus. With the centre's purchase of a cool laser, Dr. Haber became the first podiatrist in northern New Jersey to use one to treat fungal infections. Traditionally, podiatrists use a heat-based laser to cut through tissue. The cool laser uses light instead, creating a more comfortable sensation for the patient. Read more coverage on Dr. Emanuel Haber. Steven L. Haddad, MD, of the Illinois Bone & Joint Institute in Morton Grove. Dr. Haddad is an orthopedic surgeon, inventor and expert on total ankle replacement. He regularly performs surgery on patients needing uncomplicated total ankle replacement and those with deformity corrections. Dr. Haddad has also shared his expertise by leading design teams focused on creating ankle replacement prostheses for Wright Medical. Read more about Dr. Steven L. Haddad. Christopher D. Harner, MD, of UPMC Center for Sports Medicine. As vice president of the American Orthopaedic Society for Sports Medicine and former board member of the American Academy of Orthopaedic Surgeons, Dr. Harner certainly has a wealth of leadership experience. He is a physician with UPMC Center for Sports Medicine in Pittsburgh and has a professional interest in knee, ligament and cartilage injuries. Read more about Dr. Christopher D. Harner. Richard Hawkins, MD, of the Steadman-Hawkins Clinic of the Carolinas. Dr. Hawkins, who is co-founder of the Steadman-Hawkins Clinic of the Carolinas and formerly The Steadman-Hawkins Clinic in Vail, Colo., has positioned himself as a leader in orthopedic surgery and sports medicine. He has treated professional athletes from around the world, and this past year he took the time to visit Haiti and treat earthquake victims, according to a clinic report. 
Read more about Dr. Richard Hawkins. Andrew C. Hecht, MD, of Mount Sinai Medical Center in New York City. As the spine surgical consultant for the New York Jets, the co-director of spine surgery and director of the NFL spine center program for retired players at Mount Sinai Medical Center, Dr. Hecht is a busy man. Dr. Hecht's clinical interests include cervical and lumbar spine surgery, minimally invasive spine surgery, microsurgery, spine trauma and tumors. Read more about Dr. Andrew C. Hecht. Stephen H. Hochschuler, MD, of the Texas Back Institute. Dr. Hochschuler has been a leader in spine surgery for several years. In the past, he has served as a director for Alphatec Spine and serves as the chairman of the scientific advisory board for Forbes magazine. Read more about Dr. Stephen H. Hochschuler. Frank Jobe, MD, of The Kerlan-Jobe Clinic in Los Angeles. After a long career in orthopedic surgery, which includes co-founding the Kerlan-Jobe Orthopaedic Clinic, pioneering the Tommy John surgery for major league pitchers with damaged UCLs and training several sports medicine physicians, Dr. Jobe is one of the most recognizable names in orthopedic surgery and sports medicine. Read more about Dr. Frank Jobe. Dean Karahalios, MD, of the NorthShore Neurological Institute at NorthShore University Health Systems in Evanston, Skokie and Vernon Hills, Ill. Dr. Karahalios was recently named to the National Football League Players Association (NFLPA) Second Opinion Network of Neurological Surgeons. The second opinion network allows injured NFL players to have access to physicians who are unencumbered by league politics and economics. Dr. Karahalios was chosen as a representative for the NFLPA Second Opinion Network of Neurological Surgeons for brain and spine injuries for Chicago. Read more about Dr. Dean Karahalios. Spero Karas, MD, of Emory Orthopaedic & Spine Center in Atlanta, Ga. Dr. Karas is director of the Emory Orthopaedic Sports Medicine Fellowship Program. 
He also serves as consulting team physician for Georgia Tech, Emory University Athletics and Mount Vernon Presbyterian High School. A recognized authority in the fields of shoulder and knee surgery and sports medicine, Dr. Karas has authored more than 150 publications, presentations and videos and has trained over 50 residents, fellows and graduate students in his subspecialties. Read more about Dr. Spero Karas. Choll Kim, MD, PhD, of the Spine Institute of San Diego. Dr. Kim is a nationally known expert on computer-assisted minimally invasive spine surgery. He shares his expertise with fellow spine surgeons as director of the education lab in the minimally invasive spine center at Alvarado Hospital in San Diego. He uses image guidance and navigation techniques in order to perform spine surgery on complex spinal disorders, spinal stenosis, deformities, traumatic injuries and tumors. Read more about Dr. Choll Kim. Timothy E. Kremchek, MD, of Beacon Orthopaedics & Sports Medicine in Sharonville, Ohio. Dr. Kremchek, co-founder of Beacon Orthopaedics & Sports Medicine, has served many professional athletes as the medical director and chief orthopedic physician for the Cincinnati Reds. He has also been honored for his work as a team physician for local high schools. Read more about Dr. Timothy E. Kremchek. Ezriel Kornel, MD, of Brain & Spine Surgeons of New York in White Plains. Dr. Kornel has become an expert in minimally invasive endoscopic surgery of the spine, as well as minimally invasive approaches in the surgical treatment of brain tumors. He was one of the first neurosurgeons in the New York metropolitan area to replace damaged cervical discs with newly introduced artificial discs. Read more about Dr. Ezriel Kornel. Lawrence D. Lemak, MD, of Lemak Sports Medicine in Birmingham, Ala. Dr. Lemak has been a leader in sports medicine through his work to promote sports safety for both professional and youth athletes. 
He has been a major advocate for standardizing treatment for professional and college athletes and has educated many coaches about sports medicine injuries. Read more about Dr. Lawrence Lemak. Craig Levitz, MD, partner at Orlin & Cohen Orthopedic Group in Rockville Centre, N.Y. Dr. Levitz currently spreads his expertise on knee, shoulder and elbow disorders to future physicians around the world. He is lead faculty for Smith & Nephew's minimally invasive knee and shoulder surgery course and serves as consultant to other major orthopedic companies. Read more about Dr. Craig Levitz. Isador Lieberman, MD, of the Texas Back Institute. Dr. Lieberman has a passion for minimally invasive spine surgery, which has led him to become an internationally recognized leader in the field. He recently co-developed SpineAssist, a robotic tool used to perform minimally invasive spine surgery that is used at the Texas Health Presbyterian Hospital in Plano. Read more about Dr. Isador Lieberman. Kenneth Light, MD, of San Francisco Spine Center. Dr. Light recently became one of the few physicians in the United States to successfully reverse a spinal fusion. Spinal fusions were thought to be permanent procedures, until recently, and many fusions severely limit the patient's range of motion. This was the case for Dr. Light's patient, who also suffered from constant pain before the reversal. Read more about Dr. Kenneth Light. Walter R. Lowe, MD, of the University of Texas Health Science Center at Houston. In addition to his work with professional and university sports teams, Dr. Lowe is the chairman of the department of orthopedics at the University of Texas Health Science Center at Houston and the head of orthopedic surgery at Memorial Hermann-Texas Medical Center and LBJ General Hospital. Finally, Dr. Lowe serves as the medical director of the Memorial Hermann Sports Medicine Institute. Read more about Dr. Walter R. Lowe. 
Jerry Magone, MD, of Orthopaedic & Sports Medicine Consultants in West Chester, Ohio. Dr. Magone is the president and CEO of Orthopaedic & Sports Medicine Consultants where he has practiced since 1987. Dr. Magone has served on the Quality Improvement Council for AETNA/US Healthcare and on the board of directors of United Health Care Ohio's capitated orthopedic network. Read more about Dr. Jerry Magone. William Maloney, MD, of the Joint Replacement Center in Redwood City, Calif. This past year, Dr. Maloney was placed on the AAHKS President's Honor Roll, gold level, by the American Association of Hip and Knee Surgeons and received the Achievement Award for his contributions as a volunteer in orthopedics from the American Academy of Orthopaedic Surgeons and The Otto Aufranc Award from the Hip Society. Read more about Dr. William Maloney. Bert Mandelbaum, MD, a physician with Santa Monica (Calif.) Orthopaedic and Sports Medicine Group. Dr. Mandelbaum established his leadership within sports medicine by heading a development team in creating a warm-up program specifically designed to help female athletes prevent common knee injuries. Female athletes who completed the Prevent Injury and Enhance Performance (PEP) program were 1.7 times less likely to have ACL injuries than other female athletes, according to the research. Read more about Dr. Bert Mandelbaum. David Martin, MD, of Wake Forest University in Winston-Salem, N.C. A noted expert on sports medicine, Dr. Martin specializes in arthroscopy of the shoulder and knee, trauma and sports medicine of the upper extremity, foot and ankle surgery, rehabilitation and total joint replacement. He was named to the 2009-10 list of Best Doctors in America by Boston-based rating company Best Doctors. Read more about Dr. David Martin. Joseph C. McCarthy, MD, of Massachusetts General Hospital in Boston. Dr. McCarthy was recently named the president of the International Society of Hip Arthroscopy. 
He is the vice chairman of the department of orthopedics at Massachusetts General Hospital in Boston and director of the Kaplan Center for Joint Reconstructive Surgery at the Newton Wellesley Hospital. His professional interests include total joint arthroplasty and hip arthroscopy. Read more about Dr. Joseph C. McCarthy. Stephen M. McCollam, MD, of Peachtree Orthopaedic Clinic in Atlanta. Dr. McCollam currently serves as president of Atlanta's Peachtree Orthopaedic Clinic, where he has been a hand surgeon for more than 20 years. His leadership in orthopedics extends well beyond the U.S. borders, as he has volunteered his time at the Hospital Albert Schweitzer in Haiti for the past two decades. Read more about Dr. Stephen M. McCollam. Jeffrey R. McConnell, MD, of OAA Orthopaedic Specialists' Pennsylvania Spine and Scoliosis Institute. Dr. McConnell is one of the leaders of "Operation Straight Spine," a charitable mission project that provides spinal treatment for the underserved population in Kolkata, India. He is trained in Extreme Lateral Interbody Fusion and cervical total disc arthroplasty. He has been active in researching these areas and was the lead investigator for the SECURE-C cervical disc replacement FDA IDE clinical trial. Read more about Dr. Jeffrey R. McConnell. Gary Michelson, MD, of Los Angeles. In addition to his leadership as a spine surgeon for more than 25 years, Dr. Michelson has proven his status as an integral member of the spine community by inventing comprehensive spinal surgical systems which have become the foundation of several modern treatment options for spinal disorders. Read more about Dr. Gary Michelson. Peter Millett, MD, of The Steadman Clinic in Vail, Colo. Dr. Millett is a partner at The Steadman Clinic. His practice focuses on treating athletes with shoulder injuries, and he has particular expertise in revision shoulder surgery. 
He has recently been elected to serve on the National Steering Committee for the Sports Trauma and Overuse Prevention campaign, an endeavor supported by the American Academy of Orthopaedic Surgeons. Read more about Dr. Peter Millett. Allan Mishra, MD, of Total Tendon in Stanford, Calif. Since Dr. Mishra published his first article supporting the use of platelet-rich plasma for chronic tendonitis, the treatment has become increasingly popular among patients with tendonitis. Additional studies published by Dr. Mishra support the use of PRP for tennis elbow, Achilles tendon repair and arthroscopic rotator cuff repair. Read more about Dr. Allan Mishra. Mick Perez-Cruet, MD, of Michigan Head & Spine Institute at the Providence Medical Center in Southfield, Mich. Dr. Perez-Cruet is a pioneer in minimally invasive spine surgery and recently treated a patient in June, helping her return to her golf game by August. He treated her collapsed disc by combining a decompression for stenosis with a spinal fusion. Read more about Dr. Mick Perez-Cruet. Jeff Pierson, MD, of St. Francis Hospital-Mooresville, Ind. Dr. Pierson was part of a team that developed a system for decreasing blood loss during surgery. In addition to blood conservation with total joint replacement, his research interests include rehabilitation and recovery after total joint replacement. Dr. Pierson currently practices at the St. Vincent Orthopedic Center and with Northcentral Indiana Orthopedics in Logansport, Ind. Read more about Dr. Jeff Pierson. Kevin Plancher, MD, of Plancher Orthopaedics & Sports Medicine. Dr. Plancher is a sports medicine physician. In addition to his clinical work, Dr. Plancher is on the editorial review board of the Journal of the American Academy of Orthopaedic Surgeons, the American Journal of Medicine and Sports and the American Journal of Orthopedics. Read more about Dr. Kevin Plancher. William G. 
Pujadas, MD, of Jacksonville Orthopaedic Institute in Jacksonville, Fla. Dr. Pujadas is one of the founding partners of the Jacksonville (Fla.) Orthopaedic Institute, where he continues to practice from the San Marco location. He is also a member of the Baptist Center for Joint Replacement at Baptist Medical Center, where he participates in the Bloodless Medicine and Surgery Program. Read more about Dr. William G. Pujadas. Bernard Rawlins, MD, of the Hospital for Special Surgery in New York City. Dr. Rawlins is experienced in treating all spinal disorders from the cervical spine to scoliosis in adult and pediatric patients. His expertise in treating so many different conditions is supplemented by his research interests, including spine biomechanics, gene-mediated spine fusion and innovative surgical techniques. Read more about Dr. Bernard Rawlins. Arthur Rettig, MD, of Methodist Sports Medicine/The Orthopedic Specialists in Indianapolis. Dr. Rettig has earned several honors for his work as team physician for various Indiana teams. This past year, the Indiana Football Coaches Association inducted Dr. Rettig into its Hall of Fame. Read more about Dr. Arthur Rettig. Mark Reiley, MD, of Berkeley (Calif.) Orthopaedic Group. Dr. Reiley is the acclaimed developer of kyphoplasty and the founder of Kyphon, a company that aims to restore spinal function through minimally invasive therapies. Dr. Reiley also developed Inbone, a total ankle replacement technology that Robert Anderson, MD, said "will be successful for a long time" in an interview with The Saturday Evening Post. Read more about Dr. Mark Reiley. Garrison Rolle, MD, of Sacred Heart Hospital on the Gulf in Pensacola, Fla. When Sacred Heart Hospital on the Gulf teamed up with Tallahassee (Fla.) Orthopedic Clinic to launch orthopedic services, Dr. Rolle was the first to perform orthopedic surgery at the new hospital on July 9. He performed arthroscopic knee repair on Arion Ward, a Port St. Joe (Fla.) 
High School student who hurt his knee playing basketball over the summer. Read more about Dr. Garrison Rolle. Anthony Romeo, MD, of Midwest Orthopaedics at Rush in Chicago. Dr. Romeo is a sports medicine, elbow and shoulder surgeon at Midwest Orthopaedics at Rush. In his practice, he uses arthroscopic techniques to treat conditions such as rotator cuff disease, shoulder instability and elbow stiffness. He is one of the few surgeons in the country who routinely performs rotator cuff repairs and revisions using all arthroscopic techniques. Read more about Dr. Anthony Romeo. Alan Rosen, MD, of KSF Orthopaedic Center in Houston. Dr. Rosen is taking the lead in innovative treatment for orthopedic procedures by being one of the first physicians in his area to treat patients with PRP. He has a professional interest in treating upper extremity conditions and holds a certificate of added qualifications in hand surgery. Read more about Dr. Alan Rosen. Richard Rothman, MD, of Rothman Institute in Philadelphia. Dr. Rothman, founder of Rothman Institute in Philadelphia, has grown his orthopedic practice to encompass 14 locations in the Delaware Valley area over the past few years. He originally founded Rothman Institute in 1970 with the goal of using the most advanced technology available to treat his patients. Read more about Dr. Richard Rothman. Felix "Buddy" H. Savoie, III, MD, of Tulane (La.) University School of Medicine. Dr. Savoie has become an accomplished author and teacher for surgeons from around the world. He has a professional interest in shoulder, elbow and wrist surgery. He has also given several presentations for trained physicians, including a live broadcast of rotator cuff and labrum repair to an audience of over 400 orthopedic surgeons. The broadcast was part of the annual San Diego Shoulder Arthroscopy meeting. Read more about Dr. Felix "Buddy" H. Savoie, III. Mark Schickendantz, MD, of Cleveland Clinic. 
As head team physician for the Cleveland Indians professional baseball team and the Cleveland Browns professional football team, Dr. Schickendantz has treated his fair share of elbow and shoulder problems. He has performed arthroscopic surgery on a number of professional baseball players and recently evaluated an elbow strain for Miami Heat basketball player LeBron James. Read more about Dr. Mark Schickendantz. James D. Schwender, MD, of the Twin Cities Spine Center in Minneapolis. Dr. Schwender is current president of the Society for Minimally Invasive Spine Surgery. Dr. Schwender has made significant contributions to spine surgery through his research and instructional courses on several topics, including minimally invasive surgery and pediatric spine fixation. Read more about Dr. James D. Schwender. Thomas P. Sculco, MD, of the Hospital for Special Surgery. Dr. Sculco is surgeon-in-chief of a top hospital orthopedic program: U.S. News & World Report ranked the Hospital for Special Surgery in New York City number one in orthopedics this past July. Dr. Sculco is also chairman of the department of orthopedic surgery and professor of orthopedic surgery at Weill Cornell Medical College in New York City. Read more about Dr. Thomas P. Sculco. Clarence L. Shields, Jr., MD, of the Kerlan-Jobe Orthopaedic Clinic in Los Angeles. Dr. Shields has served as a sports medicine leader in many capacities throughout his career. He is a past president of the American Orthopaedic Society for Sports Medicine and received the AOSSM "Mr. Sportsmedicine" award in 2006. Read more about Dr. Clarence L. Shields, Jr. James St. Louis, DO, of the Laser Spine Institute in Tampa, Fla. Dr. St. Louis is committed to providing his patients with the best treatment possible for spinal injuries and disorders. 
He has consistently been on the cutting edge of minimally invasive spine surgery techniques and has trained many young physicians on innovative surgical techniques throughout his career. Read more about Dr. James St. Louis. Richard Steadman, MD, of The Steadman Clinic in Vail, Colo. Dr. Steadman has treated patients and professional athletes for more than two decades at The Steadman Clinic, which he co-founded. Dr. Steadman focuses on treatment for knee injuries and has treated famous patients such as Ryan Sweeney of the Oakland Athletics and Owen Hargreaves of Manchester United. Read more about Dr. Richard Steadman. Marshall Steele, MD, of Marshall | Steele. As practices gear up for the changes to be instituted by the healthcare reform law, many are wondering what the future of healthcare will look like — who will lose money, who will make money and who will be forced to change their current practices. In May, Dr. Steele shared with Becker's ASC Review his predictions for healthcare reform's impact on providers. Read more about Dr. Marshall Steele. Michael J. Stuart, MD, of Mayo Clinic in Rochester, Minn. Dr. Stuart is vice chair of orthopedic surgery and co-director of the sports medicine center at Mayo Clinic in Rochester, Minn. He helped establish the sports medicine program in 1990 and has a professional interest in lower extremity biomechanics. He is also involved in advancing unicompartmental and total knee arthroplasty, proximal tibial osteotomy and meniscus and osteochondral allograft transplantation. Read more about Dr. Michael J. Stuart. John R. Tongue, MD, a private-practice orthopedic surgeon in Tualatin, Ore. Dr. Tongue has been a leader in orthopedic surgery for many years and currently serves as the second vice president of the American Academy of Orthopaedic Surgeons. He has a professional interest in arthroscopic surgery, joint replacement and sports medicine. Read more about Dr. John R. Tongue. 
Alexander Vaccaro, MD, of the Rothman Institute in Philadelphia. As the recipient of the 2010 Leon Wiltse Award for excellence in leadership and clinical research from the North American Spine Society, Dr. Vaccaro is poised to continue developing and performing innovative surgery. His research interests include the timing of surgery after traumatic spinal cord injury, using alternative bone graft substitutes in spinal surgery and developing spinal implants for traumatic and degenerative spinal disorders. Read more about Dr. Alexander Vaccaro. Russell Warren, MD, of the Hospital for Special Surgery in New York City. During his career, he has established himself as a leader in sports medicine through his work as a team physician for the New York Giants as well as his clinical and research-based contributions to the field. He is a past president of the American Orthopaedic Society for Sports Medicine and the American Shoulder & Elbow Society. Read more about Dr. Russell Warren. Michael Weiss, DO, of Laser Spine Institute in Scottsdale, Ariz. Dr. Weiss is the chief spine surgeon at Laser Spine Institute Scottsdale, where he treats his patients who have conditions such as spinal arthritis, pinched nerves, bulging discs, herniated discs and sciatic nerve compression. He has also served as the president of the Broward Orthopedic Society and is a fellow with the American Osteopathic Academy of Orthopedics. Read more about Dr. Michael Weiss. Leo Whiteside, MD, of Missouri Bone & Joint Center in St. Louis. Dr. Whiteside, founder and president of the board of directors at the Missouri Bone & Joint Center in St. Louis, is a leader in hip and knee surgery. He invented three total knee prostheses, two unicompartmental knee prostheses, three total hip prostheses and other surgical instruments. Read more about Dr. Leo Whiteside. Austin Yeargan, MD, of North Carolina Shoulder and Elbow Surgery. 
As physicians continue to explore biologic approaches to treating orthopedic patients, Austin Yeargan, MD, a physician with North Carolina Shoulder and Elbow Surgery, is at the cutting-edge of this technology. Dr. Yeargan recently harvested adult stem cells from bone marrow from a patient's hip to repair the patient's rotator cuff, according to a Star News report. Read more about Dr. Austin Yeargan. Lewis Yocum, MD, of Kerlan-Jobe Orthopaedic Clinic in Los Angeles. Recently hailed as the possible new "King of Sports Medicine" by Forbes Magazine, Dr. Yocum has advised and operated on several professional athletes. He has special training in performing ulnar collateral ligament reconstruction, also known as Tommy John surgery, a procedure developed and pioneered by Dr. Yocum's colleague, Frank Jobe, MD. Read more about Dr. Lewis Yocum. Erik N. Zeegen, MD, of the Valley Hip & Knee Institute in Tarzana, Calif. Dr. Zeegen learned the important role surgeons play in their patients' lives while accompanying his father, an ophthalmologist, during his Saturday morning rounds. He now uses minimally invasive techniques to perform anterior hip replacement and unicompartmental knee replacement. Read more about Dr. Erik N. Zeegen. Christian Zimmerman, MD, of Idaho Neurological Institute in Boise. Dr. Zimmerman currently serves as chairman and medical director of Idaho Neurological Institute and co-chairman of Spinal Medicine Institute at St. Alphonsus Regional Medical Center in Boise, Idaho. He co-founded the Idaho Neurological Institute in 1993. Read more about Dr. Christian Zimmerman.
While “Software Developer” is only #4 in salary.com’s 8 hottest jobs of 2014 list in terms of growth rate (demand), it probably goes without saying that there are many well-paying career opportunities in Computer Science and IT (Information Technology) in general. If you’re considering pursuing a computer science career, or just curious, here is a list of 50 of the top-paying jobs in the field. While salaries for some roles vary widely by location, industry, experience level, demand and sometimes as the wind blows, this list should give you a rough idea of the more financially rewarding IT-related roles. Not all of the following roles are purely technical, although all are considered to be in the IT field in general or relevant to IT. In the interest of presenting as many different types of widely-achievable roles as possible for the average candidate, we’ve left out upper-level IT management positions such as CTO, VP and Director roles. In some cases, where job titles are merely different designations based on experience, we’ve grouped titles into one listing. E.g., we’ve made no distinction in entries between junior, intermediate, senior, and lead positions of the same type of role. So listed salary ranges usually cover all such variations. Salary ranges are a composite from different sources and should only be considered as a guideline. This role is sometimes referred to as a Computer Systems Analyst, with duties that might overlap that of an IT Project Manager, if overseeing installation or upgrade of computer systems. This role typically analyzes an organization’s computer systems and procedures; makes recommendations for process improvement; interacts with partners/ vendors and with programmers or programmer / analysts. 
Educational background might be technical, though this is more of an analytical than technical role that is focused on the business aspects of technology, including: analyzing the cost of system changes; the impact on employees; potential project timelines. Needs to interact with department managers on IT requirements; incorporate feedback from both internal and external users into business requirements documents; incorporate feedback from designers; contribute technical requirements; advise technical teams on their role, and their technology's role, in the organization; provide guidance to programmer / developers with use cases. This role focuses on specific computer systems – compared to a Business Analyst, who will analyze a broader range of processes and systems for an organization. Typically, it requires analytical skills and is business-focused, so it often requires a BA background, not necessarily a B.Sc. It does, however, require an understanding of computer systems and information, and more technical reporting and documentation procedures. Understanding the SDLC (Software Development Life Cycle), UML (Unified Modeling Language) and other technical concepts and skills is often a requirement. The role has optional certifications which bring increased opportunities and compensation. Aka CRM Analyst. CRM = Customer Relationship Management: front office functionality. This is typically a less technical role which may require a marketing or business degree — often a master's — combined with statistics, but usually requires certain technical knowledge, such as database and CRM software specifically. 
Typical responsibilities / skills: analyze customer relationship data – especially within product channels – using CRM software, and recommend strategy changes for building customer loyalty; define organizational procedures based on the data; document new procedures for internal use — typically for staff in sales, marketing and support. Typical responsibilities / skills: development and upgrade of computer systems; either interact with data and system security staff or define necessary procedures for them to follow; design, develop and test software when necessary — often middleware; document procedures for internal use, and provide various system and operations documents; participate in various review meetings, including design, program and test reviews with inter-departmental co-workers; define a process for change management. A Solutions Architect role is similar to other architect roles and can go beyond the scope of IT. Experience with hardware and software systems is a common requirement, as is an understanding of business operations. This role is sometimes but not always synonymous with a Director or CTO (Chief Technical Officer) position. Typical responsibilities / skills: understand the SDLC (Software Development Life Cycle); have broad technical knowledge of computer systems; conduct process flow analyses; transform business/ customer requirements into technical requirements (functional design document); understand and have experience with databases; interact with developers and bridge different IT architect roles. Aka E-Commerce Business Analyst. Backgrounds for this role vary: computer science, finance, statistics, management, marketing, communications. While a bachelor's degree is standard, a master's degree is sometimes required. 
Typical responsibilities / skills: analyze customer e-commerce data for behavioral or other trends; set up or configure reporting or dashboards for easy internal access to such data; create customer profiles for demographic targeting; utilize Web analytics. An ERP (Enterprise Resource Planning) Business Analyst focuses on "back office" functionality for an organization's various facets, including CRM, management, accounting, sales. Typical responsibilities / skills: have an understanding of typical business uses of ERP software; interact with various stakeholders to analyze business processes and gather requirements; incorporate business requirements to configure ERP software; interact with developers to build a reporting environment; document organization-specific customizations; conduct any necessary training sessions for use of ERP software and reporting environment. This role usually requires experience with a specific ERP solution. Similar titles include Pre-sales Engineer, PreSales Engineer, Pre-sales Technical Engineer. This role is for a product advocate/ evangelist who works with internal sales staff and possibly offers technical consulting to potential customers prior to a sale. They give product demonstrations to sales staff and potential customers and handle the technical aspects of RFIs / RFPs (Requests for Information / Requests for Proposal). So the ability to communicate with both technical and non-technical staff and customers is important, especially to pass on customer requirements to Product Managers. It requires some level of technical knowledge, especially about the systems/ software being offered, and may require some certifications. Post-sales interaction with a client is a possibility. AKA CRM Developer. Most CRM (Customer Relationship Management) software has both internal and external (Web) components. Users can be internal (sales staff, support, admin, systems developers) and external (customers, vendors, partners, researchers). 
These are the users a CRM Developer needs to keep in mind when developing solutions. Typical responsibilities/ skills: experience with a specific CRM system; custom configure a CRM used by the organization; develop custom modules to extend CRM functionality; integrate CRM features into an organization's own computer systems, including for customer use – which requires experience with a programming or scripting language, and either server, desktop/ laptop, Web or mobile (phone, tablet) development experience as necessary; document custom settings, modules and features for different levels of user. This role is focused on Web portals and often requires knowledge of a specific portal software platform, e.g., IBM WebSphere or Microsoft SharePoint. Typical responsibilities / skills: interact with Web and other systems administrators; create or oversee creation of necessary portal databases and user profiles; configure and manage portal applications; perform configuration and upgrade process tests; oversee integration of new technologies into the portal; document portal use policies and procedures (internal); handle relevant trouble tickets; train developers, content managers and end-users as necessary. Aka Computer Programmer / Analyst. May have some overlap with a Business Analyst role, such as performing requirements analysis. In some organizations, there is a lot of overlap with a Software Developer role, and in other places, the two roles work together. Typical responsibilities / skills: design of applications from a high level first – such as by using flowcharts or other graphical views — as well as actual coding of software; testing and maintenance. Specific programming language skills influence salary ranges. Sometimes referred to as a Network Support Engineer. The role sometimes overlaps with Network Architect roles. 
Typical responsibilities / skills: work with a variety of types of networks including LANs, WANs, GANs and MANs; determine network capacity requirements and ensure that the infrastructure can handle them; monitor and administer the network; troubleshoot problems. Depending on the size of the organization, a person in this role might also set up, install and configure all types of hardware (servers, printers, desktops, laptops, routers, switches) and support internal network users. Non-standard work hours are a possibility. Typical responsibilities / skills: analyze wireless networking and communication requirements; design and develop network infrastructure; capacity planning; recommend system improvements; document necessary processes; develop any necessary software such as drivers; monitor systems use and performance; set up and run wireless network tests. A senior position might lead a team of junior and intermediate engineers. This role is primarily focused on disaster recovery after a crisis with computer systems. Typical responsibilities / skills: develop strategies for disaster prevention and for resuming operations; ensure backup of data for the organization (process-wise); design and implement computer systems that will support continuous operations; interact with vendors when necessary; design and test recovery plans; report risk potential to senior management. The role may require risk management experience and knowledge of specific 3rd-party systems/ applications. Aka DBA. Sometimes has overlapping duties with Database Programmer, Database Analyst and Database Modeller, and may report to a Database Manager and/or Data Architect. 
Typical responsibilities / skills: maintain an organization's databases; design and implement databases, in coordination with a Data Architect; schedule and run regular database backups; recover lost data; implement and monitor database security; ensure data integrity; identify the needs of users and provide access to data stakeholders, data analysts and other users, as necessary. DBAs can have broad or specialized duties. E.g., tasks may be divided up: a System DBA upgrades software for bug fixes and new features, while an Application DBA writes and maintains code and queries for one or more databases in an organization. The role may require certification. Typical responsibilities / skills: produce the overall design of new software or modules based on requirements passed down; produce flowcharts, algorithms and anything else necessary for the actual coding. Junior developers might start out by maintaining (debugging) existing code / features rather than designing new code. If code in an organization is not done separately by Computer Programmers, then it falls to the Software Developers — who might also do testing and debugging, or work with teammates who do that work. Typical responsibilities / skills: identify telecom needs for an organization, including voicemail; create policies for the installation and maintenance of telecom equipment and systems within an organization; take into account any compliance needs, especially for a publicly-traded company; oversee actual installation and maintenance of equipment (cabling, modems, routers, servers, software); manage a team of telecom/ networking specialists; stay abreast of new telecom technologies for upgrade consideration; interact with vendors as necessary. 
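The backup-and-recovery duty listed under the DBA role above can be sketched in a few lines. This is a minimal illustration using Python's built-in sqlite3 module and its online backup API; the users table and its rows are invented for the example, and a production backup would of course target a file or separate server rather than memory.

```python
import sqlite3

# Hypothetical "live" database -- the table and rows are invented.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
src.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])
src.commit()

# The online backup API copies the database safely even while in use.
dst = sqlite3.connect(":memory:")  # in production, a file or remote target
src.backup(dst)

# Integrity check: the backup should contain the same number of rows.
count = dst.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)  # 2
```

Commercial DBMSes expose analogous tooling (dump utilities, replication, point-in-time recovery); the verify-after-backup step shown here is the part DBAs are expected to automate regardless of the engine.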
Typical responsibilities / skills: understand the SDLC (Software Development Life Cycle); interact with business teams to understand requirements; analyze technical problems in ERP configurations and assess risk; write any necessary code for extending an ERP platform's features, or to integrate with an organization's applications. This position usually requires experience with a particular ERP solution and with one or more facets/ modules. Depending on company size, this role may overlap with Network Administrator. A Network Manager has overall responsibility for an organization's networks; ensures that networks are always running, especially if customers and/or partners rely on them; devises and implements a plan to either prevent or recover from a disaster. Overall, they are responsible for all the networks, local and otherwise, that drive an organization, and for maintaining the hardware and cabling that goes with the networking infrastructure. That includes installing hardware and software, monitoring networks, etc., or managing a team of Network Analyst/ Engineers and/or the various Network Administrators. Certification may be required for some roles, depending on the networking technology used and especially if the role is significantly hands-on. Typical responsibilities / skills: implement and follow a network security plan; document the networking infrastructure, including any firewall protocols and policies, monitoring and disaster recovery plans; use vulnerability assessment tools to determine potential risks; monitor and investigate security breaches; recommend organizational security policies; keep up to date on changing networking technology, and review software and hardware to be able to recommend upgrades when necessary. This role may overlap with Application Development Manager. This is a fairly technical role and sometimes requires a background as an application developer. 
An App Dev PM needs the ability to interact with co-workers from multiple departments, to keep them on track to achieve milestones, drive a project forward and resolve bottlenecks. They understand the SDLC (Software Development Life Cycle), budgets, project management principles, and the basic psychology of motivating people. This role sometimes requires knowledge of a specific industry and its solutions, e.g., financial software. Overlaps with other administrator roles. Security administrators oversee access to an organization's computer systems, whether by internal or external users. Typical responsibilities / skills: develop and configure automated solutions for granting user access rights; oversee internal/ external user access rights manually when necessary; have knowledge of traditional and leading-edge security techniques and tools; understand security auditing procedures; determine security risks; investigate security breaches. This position may require knowledge of specific security-related software and applications. A data warehouse is a repository that combines data from several sources, internal and external, within an organization – e.g., sales and marketing – and is used for trend reporting. Typical responsibilities / skills for a Data Warehouse Developer: interact with business analysts to understand the necessary business logic; follow standards and procedures for databases set down by a Data Warehouse Manager; design and create databases for the purpose of data warehousing; design and run ETL (Extract, Transform, Load) procedures to extract external data and load it into a data warehouse; test the integrity of the data warehouse; write and maintain any code necessary for data warehousing tasks, including report generators. This position may require experience with specific 3rd-party applications, and often overlaps with Database Developer duties. 
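The ETL procedures mentioned for the Data Warehouse Developer above can be illustrated with a toy sketch. The CSV extract, the sales schema and the revenue calculation are all invented for the example; real pipelines use dedicated ETL tooling and far larger volumes, but the extract, transform, load shape is the same.

```python
import csv, io, sqlite3

# Invented "source system" extract: two order rows in CSV form.
raw = io.StringIO(
    "order_id,region,units,unit_price\n"
    "1,east,3,9.50\n"
    "2,West,2,4.00\n"
)

# Invented warehouse table for trend reporting.
dw = sqlite3.connect(":memory:")
dw.execute("CREATE TABLE sales (order_id INTEGER, region TEXT, revenue REAL)")

for row in csv.DictReader(raw):                       # Extract
    region = row["region"].strip().lower()            # Transform: normalize
    revenue = int(row["units"]) * float(row["unit_price"])
    dw.execute("INSERT INTO sales VALUES (?, ?, ?)",  # Load
               (int(row["order_id"]), region, revenue))
dw.commit()

total = dw.execute("SELECT SUM(revenue) FROM sales").fetchone()[0]
print(total)  # 36.5
```

The transform step is where most warehouse work lives: reconciling codes and spellings across source systems so that downstream reports aggregate cleanly.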
This role sometimes overlaps with Database Engineer or Data Warehouse Developer (see above job description) and can cover a broad range of tasks. Typical responsibilities / skills: data management and administration, data modeling, data warehousing, investigate data integrity issues; devise and conduct data tests for integrity, and follow an action plan for any necessary recovery; document access of specific databases for developers in other departments; work with logical and physical models of data; understand principles of distributed data, data redundancy; incorporate database updates as per stakeholder requirements; produce reports on analyzed business intelligence data; write database queries and complementary computer code to support internal applications, and which are possibly shared with developers in other departments. This may require knowing “back end” programming or scripting languages such as Java, Ruby, Python, Perl, etc., as well as knowledge of both traditional RDBMSes (Relational Database Management Systems) and newer NoSQL databases such as Cassandra, CouchDB, MongoDB, Hadoop and others. The role tends to require specific commercial database system experience, experience with database performance tuning and troubleshooting, and may require some forms of certification. Typical responsibilities / skills: creates the Conceptual Data Model representing an organization’s data requirements for various business processes; produces the plan for building the Logical Data Model(s) from the conceptual model. (The physical data model is the actual implementation (database) where data will be stored.) Data modeling (aka database modeling) covers business requirements for databases and is an organic process, so this role also requires adapting a database to business requirements changes. For an IT project, this overlaps with Computer and Information Systems Manager. 
This is a broader role than for an Applications Development Project Manager and may not require as much of a technical background. Project Managers should have at least an understanding of — if not experience with — the computer systems or software being built / maintained. Responsible for defining, maintaining, and enforcing a project schedule; updating the schedule when requirements change or project facets become overdue; and keeping projects on or under budget. Other responsibilities and requirements: understand the Agile development process (where necessary); run scrums; interact with multiple departments and many levels of co-workers, and convey to them the importance of their respective stakes while also keeping technical resources such as developers goal-oriented; update management on the status of projects, bottlenecks, requests for resources. IT Certifications such as PMP (Project Management Professional) can increase opportunities and salary. Aka Software Product Manager. Usually "owns" (the development and maintenance of) one or more software products / applications / platforms within an organization; works with marketing, UX / design, developers, project managers, etc., in a largely cross-departmental role. Other requirements and responsibilities: be an evangelist for the product — internally and externally where appropriate; research the market and understand what the user wants — either in terms of improvements or new features; be an influential personality and possibly have an entrepreneurial mindset; be outward-facing and understand both customer needs and strategies for acquiring customers; have broad knowledge of relevant products from various disciplines, not necessarily deep knowledge of one discipline. 
Typical responsibilities / skills: determine security risks for an organization's computer systems, databases and networks; monitor external activity; install and configure security-related software (firewalls, encryption); understand compliance issues related to security, especially for a publicly-traded organization; make recommendations to management for security policies and procedures; design and run penetration testing (simulation of attacks); keep abreast of new attack techniques and implement means of preventing these. Aka Application Developer. Typical responsibilities / skills: focus might be on middleware applications; interact with business analysts to understand and incorporate customer and business requirements; understand the SDLC (System Development Life Cycle); follow design specs and programming standards for coding applications; develop and test application-specific software and modules; interact with quality assurance specialists. Possibly requires experience with multi-tier environments. Requires an understanding of specific programming/ scripting languages and development frameworks, and possibly specific database packages. Aka Help Desk Technicians. The focus of a support tech's work is interacting with non-IT users, whether internal to a company or external. One group of technicians may support internal users of 3rd-party software, while another group may support internal and external users of company software. Responsibilities include being familiar with the software, hardware or systems they support, including keeping up to date with both new and retired features; knowing where to find the answers to questions that come in to the help desk; and possibly contributing to a repository of FAQs (frequently asked questions). Aka User Experience Design Manager. 
Typical responsibilities / skills: oversee the user experience for an application or portfolio of applications; interact with marketing/ business, technical and other departments to collect requirements and make recommendations; interact with product owner/ manager (sometimes UX owns the product); interact with technical managers, project manager, executive management; manage a team of UX Designers – hiring, management, resource planning, mentoring. This is not always strictly a technical role, though such a manager might have a background that combines management, interface design and coding — or at least be tech-savvy enough to understand what is and is not possible for an organization's software products. This is a general technical manager role, and in some organizations this title can incorporate other managerial duties, including overseeing networks and databases, and managing network engineers, database analysts, developers and more. Typical responsibilities / skills: manage help desk/ technical support teams for both internal and external users; budget for support staff equipment and software; be involved in corporate plans for hardware and software upgrades; define service call procedures and policies and monitor employee behavior on calls; ensure the updating of relevant documentation. The role usually requires industry-related technical experience and can require physical effort. This role usually requires a technical background and leads a technical team, which could consist of developers, testers, analysts and more – whether or not the organization is technical. Typical responsibilities / skills: oversee the technical aspects of internal projects; maintain corporate IT procedures, with documentation; hire and lead a technical team to support the procedures; manage resources within a budget; keep up to date with new technologies, for recommending possible internal upgrades; interact with various departments, vendors and possibly consultants / contractors. 
The role can require a master's degree in computer science or a related field. This is not always a purely technical role, though background could be and often is in computer science or a similar field. Usually, it's a business-focused role that analyzes and reports on data used within the organization. Reports are a key part of such a role and are targeted for executives who will make business decisions upon the recommendations. This could be IT process improvement, software and hardware upgrades, networking, etc. Typical responsibilities / skills: collection and analysis of business data for process improvement, similar to the "continuous improvement" philosophy; ability to express technical topics in a form non-technical decision makers can absorb; ability to structure business intelligence for internally-defined purposes. The role can require an understanding of specific software, particularly database systems, and may involve working closely with developers. This covers multiple related roles which require knowledge of at least one mobile operating system and development platform, such as Android or iOS, and the underlying programming languages. In some roles, Mobile Web development skills are a requirement. Typical responsibilities / skills: design, write and maintain mobile application code; port features for an app from another platform (such as desktop, Web, phone, tablet, wearable computing) to the mobile platform in question; integrate databases (internal) and REST APIs (internal and external); produce API components as necessary and document usage for other developers (internal and sometimes external); devise and run code tests in a simulator or on hardware; work with Quality Assurance staff for additional testing; log and fix defects. The role can sometimes require design skills for a front-end position. Aka IT Auditor, Information Systems Auditor. 
Typical responsibilities / skills: review and recommend compliance processes, especially for a publicly-traded company; determine and assess risk pertaining to technology, both for a single location and other corporate offices; audit an organization's computer systems and infrastructure for security; comply with company audit policies (e.g., if in a divisional office); draft a security breach prevention plan; define audit procedures; report audit findings. This role is more likely to require a background in MIS (Management Information Science) or business administration, although IT skills are valuable. Software Quality Assurance (SQA) work is one of those unusual sets of roles where compensation varies widely. Companies that appreciate the value of proper testing and "code coverage" pay more for a good Software QA Analyst/ Engineer than they might for a Software Developer / Software Engineer, and thus often require a seasoned developer/ engineer. Other companies pay much less and tend to employ QA testers — although both variations are sometimes referred to as a Software QA Engineer. In QA work, these are overlapping roles. The tester role is focused on running pre-defined test suites and verifying the results, reporting bugs or interacting with Software Developers/ Engineers. A QA Analyst / Engineer is more likely to be the person designing test suites and improving code coverage to verify that everything that needs to be tested is being tested. The latter role can require experience with programming/ scripting languages and/or Web or Mobile platforms. Aka DBA Manager; has a role that overlaps with other database specialists. 
Typical responsibilities / skills: oversee how data assets are managed within a company, including data organization and access: internally-generated private and public data, as well as externally-created (user) private and public data; data modeling; database design; define and ensure data backup processes; monitor and analyze database performance; troubleshoot data integrity issues; manage a team of other database specialists, including Database Administrators. The role may require an understanding of one or more traditional DBMSes or the newer technologies, as necessary. Aka User Experience Designer. This role comes in various forms: desktop, Web, mobile, wearables. Typical responsibilities / skills: design software interface flow, user interactions, screen layout and organization, screen interaction (between screens), overall appearance (visual design), and optimize the user experience — typically through iterative improvements and user feedback, to create engaging user experiences; create wireframes or more realistic prototypes — possibly with the help of front-end web developers or a web designer with the necessary development skills; recommend design patterns that are both tested (on other Web sites or apps or desktop software) as well as appropriate to the software at hand; define A/B Split Testing studies to determine which variation of an interface is more engaging. In some companies, UX teams own an application instead of a designated "content owner" and can thus request changes from software developers directly as needed. This is not necessarily a strictly technical role, and is always a creative role that involves an understanding of user psychology. However, it can require technical skills, especially if combined with another role, such as front-end Web development or front-end mobile app development. At the least, an understanding of what is and is not possible for a particular software platform is important. 
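The A/B split testing mentioned for the UX Designer role above boils down to comparing the conversion rates of two interface variants. A rough sketch of that arithmetic, using invented visitor and conversion counts and a plain two-proportion z-test (one common choice among several):

```python
import math

def ab_z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z statistic for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Invented numbers: variant A converts 200 of 4000 visitors (5.0%),
# variant B converts 260 of 4000 (6.5%).
z = ab_z_score(conv_a=200, n_a=4000, conv_b=260, n_b=4000)
print(round(z, 2))  # 2.88 -- |z| > 1.96 is significant at the 5% level
```

In practice, UX teams lean on analytics platforms to run this calculation, but the underlying test, and the caveat that sample sizes must be planned before peeking at results, is the same.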
Aka Quality Assurance Manager, (S)QA Manager. Typical responsibilities / skills: oversee all IT-related quality assurance efforts within an organization — e.g., the entire application portfolio; manage a team of QA specialists (testers, QA analysts, leads, supervisors); interact with stakeholders; attend high-level project meetings for new/ updated computer systems; budget resources for inter-departmental efforts. Whether or not a QA Manager codes in their role, this position tends to require senior-level QA analyst experience. Depending on the size of an organization, this role can overlap with that of other database specialists. Typical responsibilities / skills: provide a data architecture for an organization's data assets, including databases, data integration (combining data sources into one view), data access; define the formal data description, structures, models, flow diagrams, and overall metadata; enable stakeholders to manage their portion of the databases or data warehouse, under guidance and data access policies; have logical and physical data modeling skills, whether they're used in actuality or to oversee a Data Modeler's efforts; define data warehouse policies, including for Information Assurance. The role usually requires senior experience as a Database Developer/ Analyst / Engineer. Aka Data Warehouse Manager. Typical responsibilities/ skills: collect and analyze business data from external and internal sources; interact with stakeholders to understand and incorporate business requirements; database modeling, business intelligence skills, data mining, data analysis, reporting; oversee data warehouse integrity; oversee benchmarking of performance; manage a team of Data Warehouse Developer / Analysts. Aka Computer Network Architect. Depending on the size of an organization, this role can overlap with that of other network specialists. 
Typical responsibilities / skills: design internal and inter-office networks, including the physical layout: LAN, WAN, Internet, VoIP, etc.; monitor network usage and performance, and devise and evaluate network tests; incorporate any new business requirements so as to upgrade the overall network architecture; do any necessary cabling, install routers, and install and configure hardware and software; follow or recommend a budget for projects; choose or recommend the appropriate network components; sometimes report to a CTO (Chief Technology Officer). Network Architects usually have five or more years of experience as a Network Engineer, and supervise various other engineers in implementing a networking plan. Besides a Bachelor of Science degree, depending on the employer and the specific role, an MBA in Information Systems is sometimes required as well.

Aka Computer Software Engineer. In government positions and some more established corporations, Software Engineer and other IT positions are often divided into levels indicating experience / rank, with each higher rank carrying more responsibilities. While there is a theoretical technical difference between a Software Developer and a Software Engineer, many organizations use the term Engineer when they mean Developer. True “software engineers” are certified by an engineering board. While a Software Engineer creates / tests / documents software just as a Software Developer does, the former is more likely to also optimize software based on their technical, mathematical and/or scientific knowledge, producing more reliable software through engineering principles. The salary range listed here covers any use of the title Software Engineer.

An Information Systems Security Manager oversees the security of company and customer data and of computer systems in general.
Typical responsibilities / skills: oversee all IT security needs for an organization; determine security requirements; document security policies; implement security solutions; manage a team of information security specialists. This role tends to require a background in computer or information science or a related field, experience with specific computer systems security software, and may require one or more certifications.

Aka Application(s) Development Manager. Typical responsibilities / skills: oversee an organization’s internally-created software applications and platforms; gather application requirements; interface with the VP of Technology, marketing, project managers, and managers of other teams; manage software analysts and/or developers for an organization’s application portfolio; monitor timelines and resources; schedule projects where necessary. This role often requires senior-level experience with developing applications and may require experience with database design.

Aka Application Architect. This title is sometimes misused and applied to what would otherwise be a software developer or software engineer position. Typical responsibilities / skills: broad knowledge of software used within an organization; project management experience; senior-level software development experience; broadly oversee the entire software development (application portfolio) effort for an organization; define application architecture; interact with the various role-specific architects, project managers, and customer representatives; interact with developers while enforcing the architecture. This might require experience with specific programming languages and software development frameworks.
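The data-modeling and data-warehouse responsibilities described above can be made concrete with a minimal sketch: a fact table of sales joined to a date dimension and aggregated by quarter. The table and column names are invented for illustration, but this star-schema shape is the kind of logical model a Data Architect would formalise and a Data Warehouse Manager's team would report against.

```python
# Hypothetical star schema: one fact table keyed to one dimension table.
dim_date = {
    1: {"date": "2024-01-01", "quarter": "Q1"},
    2: {"date": "2024-04-01", "quarter": "Q2"},
}
fact_sales = [
    {"date_key": 1, "amount": 120.0},
    {"date_key": 1, "amount": 80.0},
    {"date_key": 2, "amount": 200.0},
]

def revenue_by_quarter(facts, dates):
    """Aggregate fact rows by joining each one to its dimension row."""
    totals = {}
    for row in facts:
        quarter = dates[row["date_key"]]["quarter"]
        totals[quarter] = totals.get(quarter, 0.0) + row["amount"]
    return totals

print(revenue_by_quarter(fact_sales, dim_date))  # {'Q1': 200.0, 'Q2': 200.0}
```

In a real warehouse the same join and aggregation would be expressed in SQL over properly keyed tables; the sketch only shows the fact/dimension separation that the modeling roles above are responsible for defining.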
Since the death of Colonel Muammar Gaddafi in 2011, Libya has remained in a persistent state of crisis. Western politicians and media have largely failed to understand developments during this period, and the nature of the divisions in the country is now such that external observers have repeatedly lost track of who is in charge of what; this confusion shows no sign of abating. The Next Century Foundation wishes to provide some much needed clarity regarding the current situation in Libya. The Government of National Accord (GNA) – The GNA was established in 2015 in UN-backed negotiations to try to impose a stable authority in the country. It is the only internationally recognised government in Libya and is headed by Prime Minister Fayez al-Sarraj. Unfortunately, however, the GNA has failed to exercise any kind of authority extending beyond its very limited domain in western Libya, where it operates from Tripoli. Many argue that the GNA is a corrupt institution, accusing its leaders of earning exceptionally high salaries while doing little to resolve the country’s problems. The High Council of State – The High Council of State was formerly the General National Congress in Tripoli, formed in Libya’s first democratic elections in 2012. After its members refused to dissolve the congress in 2015 (and lose their salaries), a deal was struck during the UN negotiations to re-establish the congress as the ‘High Council of State’, an advisory body to the GNA. The reality, however, is that it has long since diminished as an influential political force. It is headed by Khaled al-Mishri, who replaced Abdarrahman Swehli in April 2018. He is a leading figure in the Justice and Construction Party (the Muslim Brotherhood in Libya). The Tobruk Parliament – also known as the House of Representatives, it was established after controversial national elections with a turnout of around 18% in 2014. It is based in Tobruk, a port city in the east of Libya.
Its chairman is Aguila Saleh Issa, who regards his Tobruk-based government (headed by Prime Minister Abdullah al-Thenni) as the only legitimate government in Libya. It is also important to note that the Tobruk parliament has endorsed the leadership of General Khalifa Haftar. General Haftar – General Khalifa Haftar controls almost the entire east of Libya. With a personal militia force at his disposal (which he calls the ‘Libyan National Army’ (LNA)) and backing from Egypt, Saudi Arabia, the United Arab Emirates and France, Haftar has taken command of key strategic centres like Tobruk, Benghazi and most recently Derna. The capture of Derna on 28th June was an important step in consolidating Haftar’s position, as it remained the last sizeable bastion of opposition to him in the east. Prior to Haftar’s takeover, Derna had since October 2014 been led by the Shura Council of Mujahideen, a coalition of Islamist militias. On May 7th, General Haftar announced the “Zero Hour” for the “liberation of Derna” and his forces began ramping up their military offensive. Other, arguably more influential, centres of power in Libya are its financial institutions. Saddiq Kabir, for example, is head of the Central Bank and responsible for paying the salaries of many Libyans. Mustafa Sanalla is head of the National Oil Corporation and Abdullmaged Breish is head of the Libyan Investment Authority. It should also be noted that General Haftar recently attempted to oust Saddiq Kabir from his position as governor of the Central Bank but was unsuccessful. He accuses Libya’s Central Bank of funneling money to extremist groups and the Muslim Brotherhood. The external interference in Libya from countries near and far has done little to encourage a quicker resolution to the conflict. This is particularly evident in the way General Haftar’s support comes more from abroad than at home.
Egypt, for example, has been supplying his forces with training and various weapons, even carrying out direct air raids in Derna against Haftar’s opponents. At the same time, the UAE are operating their largest foreign military base in Al Khadim, 100 kilometres east of Benghazi. In much the same way that Iran has entrenched a lasting military presence in Syria, the UAE have identified the chaos in Libya as too good an opportunity to miss for extending their regional influence. France, on the other hand, has been hosting conferences in Paris aimed at fostering dialogue between General Haftar and al-Sarraj, all the while providing General Haftar with extensive military support during his endeavours in Derna and beyond. It would not be overly cynical to suggest that France’s main concern regarding Haftar’s quest for leadership is the financial benefits it could accrue through Libya’s oil. With such a multitude of foreign actors behind one man, Libyans have good reason to fear that they will be the ones benefitting least in any eventual political settlement. The complexity in the east is mirrored by the chaos along the southern border. Since 2011, the constant state of flux in Libya has made it very easy for neighbouring countries like Chad and Sudan to infiltrate the 1,500-kilometre-long border as and when they like. There is no longer any effective government presence in the south, only ongoing struggles for authority and control amongst local militia forces. Since 2014, the presence of the Chadian rebel group FACT in the southern Fezzan region has only increased: it is reported to have taken temporary control of key areas in the city of Sabha, for example. Counterbalancing this is the similarly sizable Sudanese presence in the south. Fighters from JEM, a Sudanese opposition group, have been fighting alongside Haftar’s forces.
The various forces pulling against each other in the south highlight the difficulty that any central Libyan government will have in regaining full control of the area in the future. On May 29th, French President Emmanuel Macron hosted a summit in Paris with representatives from Libya’s four political factions: al-Sarraj, Haftar, Saleh and al-Mishri. Each representative endorsed a motion to hold elections in Libya on 10th December, when the mandates of the High Council of State and the Tobruk Parliament will run out. It was also agreed that by 16th September a constitutional basis and electoral laws would be established. Whether these elections (if held at all) will be fruitful, however, is another matter. In May of this year, twelve people were killed in “an ISIS attack” on the headquarters of the Electoral Commission. Nor is it likely that there will be agreement on a draft constitution any time soon. A constitution is vital for providing a consensus around the rules and legal framework that would govern the elections. Particularly in Libya, elections in the absence of a constitution would be more likely to exacerbate conflict than resolve it. However, despite the relative consensus over the necessity for a constitution, there is still division over its content. Some Libyans want a referendum on the current draft constitution while others want a completely new text. There are also reports that the constitutional committee was abandoned after it became apparent that its leader had dual Libyan-American nationality. Whatever happens, once an agreement has been arrived at, it is essential for the international community to support the decision of the Libyan people, 1 million of whom are registered to vote in December’s elections should they take place. On 14th June a coalition of armed forces seized the largest oil terminals in Libya’s eastern oil crescent, resulting in many civilian casualties and damage to infrastructure.
General Haftar has since accused the Central Bank of channeling money to the militia leader responsible for blockading the oil terminals. Although General Haftar’s LNA was successful in recapturing the facilities on 25th June, he announced that management of the facilities would be transferred not to the internationally recognised National Oil Corporation, but to a different NOC in the east. In retaliation, the official NOC imposed a force majeure on the oil terminals; 850,000 barrels a day were blocked from exportation and Libya lost an estimated 900 million dollars. On 11th July Haftar was made to hand back control of Libya’s oil ports to Sanalla’s NOC following a letter from US President Donald Trump that threatened legal action over Haftar’s crippling of Libya’s oil production. Although this relieved the immediate crisis, it brought to the fore underlying frustrations in Libya over the distribution of wealth and the plundering of resources. These concerns need to be addressed in order for political reconciliation to progress. The situation also highlighted the need to protect the country’s wealth so that – despite the political turmoil – public services will continue to function. Although the upcoming elections are heralded as a positive step forward by many, it is difficult to see how they will bring about any fruitful change while the country is so fragmented. If there is no constitution then corruption and political violence will only flourish. Divisions in Libya will also remain entrenched while international powers continue to exploit the region and prevent the self-determination of the Libyan people. There is little point in diplomats congratulating themselves on rhetorical commitments to elections and ongoing dialogue, for there will be very little to congratulate until Libya reemerges as a functioning state. Indeed, the situation in Libya remains desperate.
The al-Sarraj government has had three years to create some stability with a view to peace, but has yielded no results. Lawlessness in Tripoli is rife and the government turns a blind eye to foreign aircraft landing on Libyan territory at will. There has been a scarcity of bread, fuel, and electricity in the capital for years now, the Central Bank is regularly late in paying the salaries of much of the Libyan population, and the drafting of the new constitution has suffered numerous setbacks. Compounding the humanitarian crisis are the large numbers of refugees being trafficked through western Libya from Chad, Niger, and Sudan. The position of the GNA in western Libya is also weakened by the growing threat from militias who control other nearby cities such as Misrata and Zintan. Exasperated by the lack of constructive change under al-Sarraj’s government, they plan to march on Tripoli to incite change in the capital. All of these failures point towards the need for a change, a fresh approach to the governing of Libya. Whether the international community has enough credit to install a new government in place of al-Sarraj is doubtful considering its underwhelming track record. Nor can we be certain that the international community has the will to implement such wide-sweeping reform in what is now an even more divided Libya. The best hope for a Libyan government to reassert its sovereignty over the whole country is to find ways of making compromises which generate goodwill amongst the key domestic actors. General Haftar agreeing to allow four oil export ports to reopen is an example of this. At the same time, the kind of decentralised style of government which was so prominent in Libya following its independence must be the foundation from which oil rents can be fairly redistributed to help address dire living standards. Gradually, local authorities could coordinate with each other on the security front and move towards a unified national force.
By no means is it an easy task, but it may represent an encouraging starting point on the way to rebuilding what is a terribly torn country. Why does France support General Haftar in Libya? On 29th May 2018, France convened an international meeting on Libya, bringing together representatives from its four divided political factions. These included Aguila Saleh (the Chair of the House of Representatives in Tobruk, whose Prime Minister is Abdullah al-Thenni), Khalid al-Mishri (the head of the High Council of State in Tripoli, which was originally the old congress), Fayez al-Sarraj (the head of the internationally recognised Presidential Council) and General Khalifa Haftar. General Haftar, commander of the so-called Libyan National Army (LNA), has taken control over much of eastern Libya. He has command of the strategic port city of Tobruk and Libya’s second largest city, Benghazi. In late June Haftar also took control of the city of Derna in a ground offensive by the LNA. This followed a two-year siege by Haftar’s forces and hundreds of civilian casualties. The main division in Libya, therefore, is between the internationally recognised Government of National Accord (GNA) in the west, headed by al-Sarraj, and Haftar’s forces in the east. Macron’s goal for the summit was to get all four Libyan sides to commit to an agreement under the auspices of the UN and to start arrangements for staging elections before the end of 2018. Perhaps unsurprisingly, no tangible results have come from this meeting. A similar meeting between al-Sarraj and Haftar in July 2017 also produced no positive outcome. It is becoming clear that these summits on Libya are heralded more as a diplomatic accomplishment for France than as a genuine breakthrough in the conflict.
Despite encouraging open dialogue and peaceful conflict resolution, however, France has maintained its controversial support for General Haftar for the past three years instead of backing the GNA, which was established by a UN-led initiative in 2015. Almost immediately after Macron’s summit at the end of May it became apparent that France had provided General Haftar with reconnaissance aircraft to help his forces advance on Derna. Why, then, is there such discord between Macron’s rhetoric about peace and diplomacy on the one hand, and his provision of weaponry to a particular side of the conflict on the other? During the summit in May, Macron was keen to promote a quick presidential election in Libya, supposedly as a means to centralise the government and reduce tensions in the region. Many argue, however, that elections cannot happen until there is a constitution which would provide a set of rules and a legal framework to govern them. Many Libyans are afraid that elections in the absence of a constitution will only catalyse conflict rather than resolve it. It is likely, therefore, that France’s ambitions for a quick election in Libya are part of a coordinated step with the UAE and Egypt (Haftar’s other international supporters) to facilitate the General’s takeover while the GNA is weak. France ultimately sees Haftar as the ally who could best serve its interests in Libya, which is why it has supported the consolidation of his control in the east and is pushing for his success in upcoming presidential elections. From a geopolitical standpoint, France wants to have a dominant international presence in Libya. Having had brief direct administrative rule over Fezzan in southern Libya from 1944 to 1951, it is keen to maintain a close presence in the region, which is rich in reserves of oil, gas and minerals. This would also allow France to extend its influence over the nearby countries of Chad, Mali and Niger.
Macron is also keen to compromise Italy’s interests in Libya, and chose a strategic moment for the summit (announcing it only a week beforehand) at a time when Italy was occupied with its own changing government. Despite Rome’s attempts to maintain a presence in Libya and curb the flow of migrants across the Mediterranean, its influence in Tripoli has waned of late. Italy’s ties with western Libya had previously been through the city of Misrata, which is now largely autonomous and ruled by militias opposed to the GNA. France and Italy are also leading foreign stakeholders in Libya’s hydrocarbons sector and have competing business interests in the country’s oil revenue. Therefore, by supporting Haftar France not only provides the military general with legitimacy but also asserts itself as the leading international actor in Libya’s internal politics and stands to gain financially. Haftar also presents himself as the military strength of Libya against terrorism, an image that France is keen to propagate. He claimed that his recent offensive on Derna, for instance, was in order to relieve the city of ‘terrorists and those who carry weapons against the LNA’. At a time when Libya needs unity and stability more than ever, international players like France need to prioritise the interests of Libyans above their own. Upcoming elections will be undermined if a constitution is not put in place to guarantee a safe transition to a centralised, democratically elected government. France needs to use its influence to smooth divisions in Libya, not exacerbate them. And the fourth is the UN-sponsored amalgam whose remit is to bring peace to the country. And the international powers watch Libya burn. None bar Italy actually have an embassy in Tripoli. The rest of us watch from afar, though it was us who created this mess. Italy has her reasons for being more proactively engaged of course, the migrant issue being chief among them.
The river of migrants from Africa cuts through chaos-ridden Libya to the coast, and thence across the Med to Italy. A side-issue here. Italian PhD student Giulio Regeni was beaten to death in Egypt in January 2016. Apparently overzealous members of the security services had been prompted to ruthless murder by his having met Muslim Brotherhood members as part of research for his thesis on trade unionism. Italy broke off diplomatic relations with Egypt in protest. So Egypt used its influence over General Haftar of Libya to get him to turn off the tap and stop the migration to Italy. Which Haftar, who had clout with the traffickers, rapidly did. As a consequence, Italy renewed its diplomatic ties with Egypt in September 2017. Meanwhile, ironically, someone Haftar had no power over re-commenced the trafficking. Haftar had cut migration to a trickle. Now, once again, it is a flood. But have any of us the courage to maintain a diplomatic mission in Tripoli, Libya? No. Well, only Italy amongst the countries of the world, and they have no real choice. Understandable, perhaps. They all left for good soon after the US ambassador was murdered in Benghazi. But now the killing of the wonderful Chris Stevens in 2012 must be put behind us. It’s time to go back, and go back we must. It is an easy and economical step for which there may be huge dividends, and without which the tide of migrants will almost inevitably continue. It is a step we can and must take. The UK government is sometimes a poor listener, which can result in inefficient and ineffective dispersal of aid money. Increased communication with refugees, both in the camps to which they have been displaced in the first instance and subsequently in the UK, would lift their esteem, morale and resolve.
Most particularly with regard to those coming from war-torn states, the international community in general and the UK in particular could empower local communities in the region to take control of their own destiny by giving them a voice in regard to the dispersal of international aid. An effort should be made to recruit and employ teachers, doctors and nurses, or others appropriately qualified, who are themselves refugees within the camps wherever possible; and government aid funds should be diverted to this purpose in preference to bringing in Western teachers, doctors, nurses and others to perform these roles. This both lifts morale and provides economic support to key refugees. Within the UK, there are initiatives such as Herts Welcomes Syrian Families, Refugee Action, and the Refugee Council, whose support of the Vulnerable Persons Relocation Scheme has positively affected thousands of migrants. However, the “temporary protection” which this programme permits is inadequate. Under this programme, migrants are offered the chance to study or work for a limited five-year period only. We urge that this time period be extended or that they be offered fast-track citizenship after five years. Trained migrant professionals are often not permitted to work in the UK whilst seeking asylum. Asylum seekers should be permitted to work in the United Kingdom whilst seeking indefinite leave to remain, should they wish to do so. The asylum seekers’ allowance is only £36.95 a week, which is evidently very small, especially when compared to the jobseeker’s allowance of £73.10. It makes life incredibly precarious and is utterly unfair, given that asylum seekers are then unable to work legally and so become a burden on the taxpayer.
However, whilst it is extremely important that refugees and asylum seekers should have the opportunity to work in the UK, it is also important to bear in mind that safeguards need to be put in place to see that they are not exploited by employers and that they are paid a fair wage for the job that they are doing. This is of importance in preventing bad feeling and resentment on the part of indigenous workers (the “immigrants” should not be perceived as a threat to the jobs and terms/conditions of employment of UK citizens). To be granted university places on the same terms as native citizens, migrants whose status has yet to be determined must have lived half of their lives in the UK. This denial of university education to the majority of young migrants whose status has yet to be determined prevents them from rebuilding their lives and retaining their dignity. The Lawyers’ Refugee Initiative advocates the use of humanitarian visas, or “humanitarian passports” – that is to say, visas for the specific purpose of seeking asylum on arrival – issued in the country of departure or intended embarkation. We urge that this procedure be used extensively by the United Kingdom. In order to speed up the processing of asylum applications and reduce legal costs and emotional strain for all involved, we recommend that the Home Office only appeal decisions in exceptional circumstances, and rarely if the case has been under consideration for more than five years. It should be a statutory duty that all appeals by the Home Office take place within one year and be grounded on strict criteria. The actual asylum application process should be based on criteria that are generous to genuine refugee claims, with a mechanism for withdrawing status on conviction of a crime – and fast-track citizenship after five years. We should regard refugees, whatever their circumstance, with compassion and mercy.
Compassion and mercy are moral virtues which elevate humanity, and therefore our obligation to refugees transcends any obligation we may have to accept economic migrants and/or the free movement of labour, and should not be confused with any such obligation – and the UK is not yet doing enough. Note: The Next Century Foundation acknowledges the help of Initiatives of Change, an organisation that co-hosted the migration conference that contributed to the preparation of this submission. On 13 September 2017, Italy’s ambassador Giampaolo Cantini was sent back to the Egyptian capital after more than one year of soured relations between the two countries over the death of the Italian Cambridge PhD student, Giulio Regeni, in Cairo in January 2016. The 28-year-old student was tortured and killed in Egypt, allegedly by the Egyptian security services who, since the very outset of the affair, have denied any involvement. The issue quickly triggered an open diplomatic crisis between Egypt and Italy due to al-Sīsī’s government’s repeated avoidance of its responsibility to investigate the murder in the face of hard evidence implying that the Egyptian security services were culpable. For more than a year, faced with the hardline stance taken by the Italian government as it strove to obtain the names of those responsible and the reason for this abhorrent act, the Egyptian authorities tried to cover up the truth, forging documents and misleading Italian magistrates with false trails. This misdirection is the umpteenth deplorable act of a state whose crackdown on human rights is going down in history as one of the worst in years. And while everything seemed to suggest the diplomatic deadlock was unlikely to break, out of the blue the Italian ambassador was sent back to Cairo and the crisis magically resolved, as if it had never happened. No change of strategy, official apology or acknowledgment of guilt was issued by the Egyptian authorities.
Likewise, no clear explanation was provided by the Italian government on the matter. So, what led the Italian government to take the incongruous decision to give up its legitimate right to pursue the truth about the brutal death of one of its citizens in a foreign land? Interestingly, the solution to this conundrum may not lie too far away. And with a subtle combination of imagination and cynicism, we might be able to find it. If the world ran according to a Machiavellian conception of politics, then one might think that everything happens for a reason and nothing in politics is left to chance. Accordingly, one might think, for instance, that the investigation into the death of Giulio was sidelined in exchange for a halt to the migration flow from Libya to Italy, given the strong friendship that binds al-Sīsī to Haftar, the Libyan strongman in control of the eastern part of the country. Indeed, the bizarre coincidence of the sudden halt in migrant influxes to Italy on those same days when the Italian ambassador was sent back to Cairo, after years of unsuccessful attempts to curb them, might represent enough evidence to a more cynical mind. Or, equally, the complacency of the Italian government in not taking action when confronted with some “explosive evidence” on the case provided by the Obama administration could serve as a further clue in this respect. Nobody will ever know what happened in those days, for it is no longer the intention of the Italian government to unravel the truth. People will never know for sure why Giulio was killed, or who tortured and assassinated him; neither will they know why the Italian government abruptly sent its ambassador back to Cairo, forever waiving the right to justice for one of its citizens, a son of Italy. The truth will be covered up, wiped out in accordance with the Italian tradition of state secrets. And now only sorrow is left. Sorrow of a girlfriend in losing the love of her life. Sorrow of a family in losing a son.
Sorrow of a nation in losing its future and its honour. Yes, its honour. Honour, because Giulio is not just a human being viciously slaughtered on foreign soil. Giulio represents a vision, a feeling, an idea. The idea that unites men and women of different countries and different cultures; the idea that human rights violations in Egypt are real, raw and ruthless, and affect men and women whatever their nationality; the idea that Italy is a country whose leaders had no hesitation in selling the truth, trust and hope of its own citizens, as well as its own dignity, in exchange for some political or economic payoff; the idea that western democracies “fill their mouths” with fine words on human rights but that this is, after all, a mere façade, as they continue to aid and abet such crimes and violations where convenient. There is a Latin saying whose power and meaning have always struck me. It expresses the universal principle of a vision, a feeling, an idea. The Truth. “Veritas Omnia Vincit”: truth conquers all. And Giulio represents the Truth, for his death has shone a light on the lies, the falsehood, the cruelty and the wickedness of a global system that brings together democracies and dictatorships, thus rendering them accomplices. It does not matter that the official version will never admit the existence of any deal, agreement or negotiation between Italy and Egypt in exchange for silence on the death of Giulio. For the conspicuous silence on the part of the Italian government speaks louder than any official statement. Veritas Omnia Vincit, when public outcry spread across the world after Giulio’s death, against al-Sīsī’s authoritarian rule, uniting men and women who, just like Giulio’s family, have lost their loved ones. And again, Veritas Omnia Vincit, when the mask of this self-proclaimed democracy is removed, revealing the true face of power. I recently visited a Banksy exhibition at the Moco museum in Amsterdam.
I was taken aback by how the artist emphasised the existence of a thread that connects sorrow to hope and love. In suffering and grief people can gather and unite, taking solace from the shared experience of seeking justice, truth or stillness. Such feelings bring them hope. And being able to connect and to hope means being able to love. This is what is happening in Egypt, Italy and elsewhere in the world at the moment. The sorrow caused by the circumstances of Giulio’s death has spread across the globe, uniting people in hope for justice, for “truth” and for a better world. “Only in the darkness can you see the stars” (Martin Luther King Jr). Giulio is your son, your brother, your cousin; Giulio is your colleague, your neighbour, your friend; Giulio is a vision, a feeling, an idea. Giulio is hope, love and truth, and he has already won. The Next Century Foundation took part in the 36th session of the Human Rights Council in Geneva. During the General Debate on Item 10, “Technical Assistance and Capacity-building”, the NCF delivered an oral intervention on the shortcomings of the current UN strategy in Libya, Syria and Bahrain and the steps that should be taken. Al-Sarrāj and Haftar: a Turning Point for Libya? Following the French-brokered peace talks on July 25 between the Libyan military strongman, General Khalīfa Belqāsim Ḥaftar, and Fāyez Muṣṭafā al-Sarrāj, Prime Minister of the Government of National Accord of Libya, an agreement on a national reconciliation process for the North African country seems to have been attained. The settlement reached constitutes a first step towards a widely endorsed power-sharing solution involving the two biggest factions in the country. On the one hand, al-Sarrāj’s UN-backed government in Tripoli exercises strong power over most of the western part of the country – including a good share of those areas formerly under the control of anti-Gaddafi militias.
On the other hand, General Haftar – who seized control of the eastern part of Libya – has been emerging as an essential actor in addressing the threats of jihadism and migration, thus demonstrating to European powers his strategic value for their domestic interests. In spite of the enthusiasm for this undoubtedly positive turn of events in the country, however, a few concerns relating to some “technical aspects” of the matter should be raised. First, it is no secret that General Haftar is an ambiguous figure who teeters between being a strong military leader and a potential future dictator. His thirst for power, as well as his unorthodox approach to tackling jihadists and migration flows towards Europe, might be a sufficient red flag for the international community to cast doubt on his reliability as a potential next leader of the country. Second, the power-sharing solution negotiated at the peace talks is inherently flawed. Despite the great influence the two leaders have in Libya, the rest of the country is still strongly divided. Libya is currently split into several militia zones controlled by the most disparate military groups. Each of them would hardly be inclined to relinquish power, and thus may constitute a threat to the stability of the country if not involved in the peace talks. A power-sharing settlement is, in this sense, irrelevant if not all of the main parties and factions are involved in the process. Interestingly, statistical records of cases where a national reconciliation process was implemented through a power-sharing settlement show that greater inclusiveness in the peace process is correlated with a greater likelihood of success of the process itself.
Within this framework, while strong doubts emerge over the lasting effectiveness of the agreement that has been reached, the sole certainty is that the current shambles in Libya is, above all, a brutal reflection of an underlying struggle between foreign powers for future control of Libya’s precious resources. In this sense, supporting either al-Sarrāj or Haftar, or both, is only a question of strategy and, after all, another way of saying that peace does not really matter on the geopolitical chessboard. Hartzell, Caroline, and Matthew Hoddie. 2003. “Institutionalizing Peace: Power Sharing and Post-Civil War Conflict Management”. American Journal of Political Science 47 (2): 318. doi:10.2307/3186141. We say goodbye to 2016, which has been a year of war and more war in the Middle East, while in the West the arrival of Brexit and Donald Trump has heralded a new era of anti-establishmentarianism that some have found disconcerting. An effort should be made to recruit and employ teachers, doctors, nurses and others appropriately qualified who are themselves refugees within the camps wherever possible, and government aid funds should be diverted to this purpose, in preference to bringing in Western teachers, doctors, nurses and others to perform these roles. This both lifts morale and provides economic support to key refugees. That greater emphasis be given to delivering education in refugee camps. That international governments consult local people regarding actions that affect their wellbeing before taking those actions. And that where possible, most particularly in war-torn nations, the international community empower local communities to take control of their own destiny, e.g. by giving them a voice in regard to the dispersal of international aid. We support an expansion of the definition of refugee under international law to incorporate those displaced by environmental disasters, in particular those that are human-caused.
Whilst the current definition of refugee encompasses the persecuted (as well as, by de facto practice, those displaced by war), a new legal framework is needed to also address the needs of communities affected by climate change where that climate change is life-threatening, as in cases of famine resulting from severe desertification, or in cases of population displacement because of rising sea levels. That asylum seekers be permitted to work in the United Kingdom whilst seeking asylum, should they wish to do so. That the concept of “temporary protection”, including permission to work and/or study in the United Kingdom for a limited period, be further extended beyond the current Vulnerable Persons Relocation Scheme. That the concepts of “humanitarian passports” and of registration for asylum within the region be developed further. The Lawyers’ Refugee Initiative advocates the extensive use of humanitarian visas – that is to say, visas for the specific purpose of seeking asylum on arrival – issued in the country of departure or intended embarkation. In order to speed up the processing of asylum applications and reduce legal costs and emotional strain for all involved, we recommend that the Home Office appeal decisions only in exceptional circumstances, and rarely if the case has been under consideration for more than five years. It should perhaps be a statutory duty that all Home Office appeals take place within one year and be grounded in strict criteria. The asylum application process itself should have inspectors who ensure that decisions are made on independent criteria that are generous to genuine refugee claims, with a mechanism for withdrawing status for five years on conviction of a crime or proven false information – and fast-track citizenship after five years.
IRAQ: That a special task force be appointed to provide aid and support to IDPs (internally displaced persons) in and from Nineveh, Anbar and Salah ad Din Provinces, so that the communities in Northern and Western Iraq feel a sense of hope and encouragement. LIBYA: The return of the Ambassadors of the United Kingdom, Italy and France to Libya to support the new internationally recognised government of Libya. That the international community agree to the request from the new internationally recognised government of Libya for help with land mine clearance – or at the very least technical support and training for land mine clearance. SYRIA: That a ‘track two’ conference be convened which participants would attend without precondition, and that would welcome members of the government, key international players and those from any faction of the opposition. That the communities in refugee-receiving countries be encouraged by faith leaders to welcome into their homes people new to the area of other faiths or cultures, with no agenda other than that of befriending them and offering a listening ear. The West needs to rediscover the dynamic of its own rich spiritual tradition. At best this has been the engine of social advance, just governance and effective peacemaking for our countries. Too often as a civilisation we project an image of material self-seeking, and miss the active comradeship we could enjoy with believers from other traditions. We also urge the international community to regard refugees, whatever their circumstance, with compassion and mercy. It is our duty to our fellow men and women to treat those in distress with compassion. Compassion is love in action. Although we are not legally obliged to accept refugees, we do have a moral duty to help significantly ameliorate their situation so that they can take temporary refuge in countries neighbouring their own.
That duty is a duty to humanity that transcends any obligation we may have to accept economic migrants and/or the free movement of labour, and should not be confused with any such obligation – and we are not yet doing enough.
Daniel 1:2 And the Lord gave Jehoiakim king of Judah into his hand, with part of the vessels of the house of God: which he carried into the land of Shinar to the house of his god; and he brought the vessels into the treasure house of his god. Daniel 1:5 And the king appointed them a daily provision of the king's meat, and of the wine which he drank: so nourishing them three years, that at the end thereof they might stand before the king. Daniel 1:7 Unto whom the prince of the eunuchs gave names: for he gave unto Daniel the name of Belteshazzar; and to Hananiah, of Shadrach; and to Mishael, of Meshach; and to Azariah, of Abed-nego. Daniel 1:8 But Daniel purposed in his heart that he would not defile himself with the portion of the king's meat, nor with the wine which he drank: therefore he requested of the prince of the eunuchs that he might not defile himself. Daniel 1:10 And the prince of the eunuchs said unto Daniel, I fear my lord the king, who hath appointed your meat and your drink: for why should he see your faces worse liking than the children which are of your sort? then shall ye make me endanger my head to the king. Daniel 1:13 Then let our countenances be looked upon before thee, and the countenance of the children that eat of the portion of the king's meat: and as thou seest, deal with thy servants. Daniel 1:14 So he consented to them in this matter, and proved them ten days. Daniel 1:15 And at the end of ten days their countenances appeared fairer and fatter in flesh than all the children which did eat the portion of the king's meat. Daniel 1:16 Thus Melzar took away the portion of their meat, and the wine that they should drink; and gave them pulse. Daniel 1:17 As for these four children, God gave them knowledge and skill in all learning and wisdom: and Daniel had understanding in all visions and dreams. Daniel 1:18 Now at the end of the days that the king had said he should bring them in, then the prince of the eunuchs brought them in before Nebuchadnezzar.
Daniel 1:19 And the king communed with them; and among them all was found none like Daniel, Hananiah, Mishael, and Azariah: therefore stood they before the king. Daniel 1:21 And Daniel continued even unto the first year of king Cyrus. Daniel 2:8 The king answered and said, I know of certainty that ye would gain the time, because ye see the thing is gone from me. Daniel 2:10 The Chaldeans answered before the king, and said, There is not a man upon the earth that can shew the king's matter: therefore there is no king, lord, nor ruler, that asked such things at any magician, or astrologer, or Chaldean. Daniel 2:13 And the decree went forth that the wise men should be slain; and they sought Daniel and his fellows to be slain. Daniel 2:15 He answered and said to Arioch the king's captain, Why is the decree so hasty from the king? Then Arioch made the thing known to Daniel. Daniel 2:18 That they would desire mercies of the God of heaven concerning this secret; that Daniel and his fellows should not perish with the rest of the wise men of Babylon. Daniel 2:22 He revealeth the deep and secret things: he knoweth what is in the darkness, and the light dwelleth with him. Daniel 2:23 I thank thee, and praise thee, O thou God of my fathers, who hast given me wisdom and might, and hast made known unto me now what we desired of thee: for thou hast now made known unto us the king's matter. Daniel 2:24 Therefore Daniel went in unto Arioch, whom the king had ordained to destroy the wise men of Babylon: he went and said thus unto him; Destroy not the wise men of Babylon: bring me in before the king, and I will shew unto the king the interpretation. Daniel 2:25 Then Arioch brought in Daniel before the king in haste, and said thus unto him, I have found a man of the captives of Judah, that will make known unto the king the interpretation.
Daniel 2:29 As for thee, O king, thy thoughts came into thy mind upon thy bed, what should come to pass hereafter: and he that revealeth secrets maketh known to thee what shall come to pass. Daniel 2:35 Then was the iron, the clay, the brass, the silver, and the gold, broken to pieces together, and became like the chaff of the summer threshingfloors; and the wind carried them away, that no place was found for them: and the stone that smote the image became a great mountain, and filled the whole earth. Daniel 2:37 Thou, O king, art a king of kings: for the God of heaven hath given thee a kingdom, power, and strength, and glory. Daniel 2:38 And wheresoever the children of men dwell, the beasts of the field and the fowls of the heaven hath he given into thine hand, and hath made thee ruler over them all. Thou art this head of gold. Daniel 2:39 And after thee shall arise another kingdom inferior to thee, and another third kingdom of brass, which shall bear rule over all the earth. Daniel 2:40 And the fourth kingdom shall be strong as iron: forasmuch as iron breaketh in pieces and subdueth all things: and as iron that breaketh all these, shall it break in pieces and bruise. Daniel 2:43 And whereas thou sawest iron mixed with miry clay, they shall mingle themselves with the seed of men: but they shall not cleave one to another, even as iron is not mixed with clay. Daniel 2:44 And in the days of these kings shall the God of heaven set up a kingdom, which shall never be destroyed: and the kingdom shall not be left to other people, but it shall break in pieces and consume all these kingdoms, and it shall stand for ever. Daniel 2:45 Forasmuch as thou sawest that the stone was cut out of the mountain without hands, and that it brake in pieces the iron, the brass, the clay, the silver, and the gold; the great God hath made known to the king what shall come to pass hereafter: and the dream is certain, and the interpretation thereof sure.
Daniel 2:49 Then Daniel requested of the king, and he set Shadrach, Meshach, and Abed-nego, over the affairs of the province of Babylon: but Daniel sat in the gate of the king. Daniel 3:1 Nebuchadnezzar the king made an image of gold, whose height was threescore cubits, and the breadth thereof six cubits: he set it up in the plain of Dura, in the province of Babylon. Daniel 3:2 Then Nebuchadnezzar the king sent to gather together the princes, the governors, and the captains, the judges, the treasurers, the counsellers, the sheriffs, and all the rulers of the provinces, to come to the dedication of the image which Nebuchadnezzar the king had set up. Daniel 3:3 Then the princes, the governors, and captains, the judges, the treasurers, the counsellers, the sheriffs, and all the rulers of the provinces, were gathered together unto the dedication of the image that Nebuchadnezzar the king had set up; and they stood before the image that Nebuchadnezzar had set up. Daniel 3:7 Therefore at that time, when all the people heard the sound of the cornet, flute, harp, sackbut, psaltery, and all kinds of musick, all the people, the nations, and the languages, fell down and worshipped the golden image that Nebuchadnezzar the king had set up. Daniel 3:8 Wherefore at that time certain Chaldeans came near, and accused the Jews. Daniel 3:9 They spake and said to the king Nebuchadnezzar, O king, live for ever. Daniel 3:11 And whoso falleth not down and worshippeth, that he should be cast into the midst of a burning fiery furnace. Daniel 3:12 There are certain Jews whom thou hast set over the affairs of the province of Babylon, Shadrach, Meshach, and Abed-nego; these men, O king, have not regarded thee: they serve not thy gods, nor worship the golden image which thou hast set up. Daniel 3:13 Then Nebuchadnezzar in his rage and fury commanded to bring Shadrach, Meshach, and Abed-nego. Then they brought these men before the king.
Daniel 3:14 Nebuchadnezzar spake and said unto them, Is it true, O Shadrach, Meshach, and Abed-nego, do not ye serve my gods, nor worship the golden image which I have set up? Daniel 3:15 Now if ye be ready that at what time ye hear the sound of the cornet, flute, harp, sackbut, psaltery, and dulcimer, and all kinds of musick, ye fall down and worship the image which I have made; well: but if ye worship not, ye shall be cast the same hour into the midst of a burning fiery furnace; and who is that God that shall deliver you out of my hands? Daniel 3:16 Shadrach, Meshach, and Abed-nego, answered and said to the king, O Nebuchadnezzar, we are not careful to answer thee in this matter. Daniel 3:18 But if not, be it known unto thee, O king, that we will not serve thy gods, nor worship the golden image which thou hast set up. Daniel 3:19 Then was Nebuchadnezzar full of fury, and the form of his visage was changed against Shadrach, Meshach, and Abed-nego: therefore he spake, and commanded that they should heat the furnace one seven times more than it was wont to be heated. Daniel 3:20 And he commanded the most mighty men that were in his army to bind Shadrach, Meshach, and Abed-nego, and to cast them into the burning fiery furnace. Daniel 3:21 Then these men were bound in their coats, their hosen, and their hats, and their other garments, and were cast into the midst of the burning fiery furnace. Daniel 3:22 Therefore because the king's commandment was urgent, and the furnace exceeding hot, the flame of the fire slew those men that took up Shadrach, Meshach, and Abed-nego. Daniel 3:23 And these three men, Shadrach, Meshach, and Abed-nego, fell down bound into the midst of the burning fiery furnace. Daniel 3:24 Then Nebuchadnezzar the king was astonied, and rose up in haste, and spake, and said unto his counsellers, Did not we cast three men bound into the midst of the fire? They answered and said unto the king, True, O king.
Daniel 3:26 Then Nebuchadnezzar came near to the mouth of the burning fiery furnace, and spake, and said, Shadrach, Meshach, and Abed-nego, ye servants of the most high God, come forth, and come hither. Then Shadrach, Meshach, and Abed-nego, came forth of the midst of the fire. Daniel 3:27 And the princes, governors, and captains, and the king's counsellers, being gathered together, saw these men, upon whose bodies the fire had no power, nor was an hair of their head singed, neither were their coats changed, nor the smell of fire had passed on them. Daniel 3:28 Then Nebuchadnezzar spake, and said, Blessed be the God of Shadrach, Meshach, and Abed-nego, who hath sent his angel, and delivered his servants that trusted in him, and have changed the king's word, and yielded their bodies, that they might not serve nor worship any god, except their own God. Daniel 3:29 Therefore I make a decree, That every people, nation, and language, which speak any thing amiss against the God of Shadrach, Meshach, and Abed-nego, shall be cut in pieces, and their houses shall be made a dunghill: because there is no other God that can deliver after this sort. Daniel 3:30 Then the king promoted Shadrach, Meshach, and Abed-nego, in the province of Babylon. Daniel 4:1 Nebuchadnezzar the king, unto all people, nations, and languages, that dwell in all the earth; Peace be multiplied unto you. Daniel 4:2 I thought it good to shew the signs and wonders that the high God hath wrought toward me. Daniel 4:5 I saw a dream which made me afraid, and the thoughts upon my bed and the visions of my head troubled me. Daniel 4:7 Then came in the magicians, the astrologers, the Chaldeans, and the soothsayers: and I told the dream before them; but they did not make known unto me the interpretation thereof. Daniel 4:10 Thus were the visions of mine head in my bed; I saw, and behold a tree in the midst of the earth, and the height thereof was great.
Daniel 4:12 The leaves thereof were fair, and the fruit thereof much, and in it was meat for all: the beasts of the field had shadow under it, and the fowls of the heaven dwelt in the boughs thereof, and all flesh was fed of it. Daniel 4:16 Let his heart be changed from man's, and let a beast's heart be given unto him; and let seven times pass over him. Daniel 4:17 This matter is by the decree of the watchers, and the demand by the word of the holy ones: to the intent that the living may know that the most High ruleth in the kingdom of men, and giveth it to whomsoever he will, and setteth up over it the basest of men. Daniel 4:18 This dream I king Nebuchadnezzar have seen. Now thou, O Belteshazzar, declare the interpretation thereof, forasmuch as all the wise men of my kingdom are not able to make known unto me the interpretation: but thou art able; for the spirit of the holy gods is in thee. Daniel 4:22 It is thou, O king, that art grown and become strong: for thy greatness is grown, and reacheth unto heaven, and thy dominion to the end of the earth. Daniel 4:26 And whereas they commanded to leave the stump of the tree roots; thy kingdom shall be sure unto thee, after that thou shalt have known that the heavens do rule. Daniel 4:29 At the end of twelve months he walked in the palace of the kingdom of Babylon. Daniel 4:31 While the word was in the king's mouth, there fell a voice from heaven, saying, O king Nebuchadnezzar, to thee it is spoken; The kingdom is departed from thee. Daniel 4:32 And they shall drive thee from men, and thy dwelling shall be with the beasts of the field: they shall make thee to eat grass as oxen, and seven times shall pass over thee, until thou know that the most High ruleth in the kingdom of men, and giveth it to whomsoever he will.
Daniel 4:33 The same hour was the thing fulfilled upon Nebuchadnezzar: and he was driven from men, and did eat grass as oxen, and his body was wet with the dew of heaven, till his hairs were grown like eagles' feathers, and his nails like birds' claws. Daniel 4:35 And all the inhabitants of the earth are reputed as nothing: and he doeth according to his will in the army of heaven, and among the inhabitants of the earth: and none can stay his hand, or say unto him, What doest thou? Daniel 4:36 At the same time my reason returned unto me; and for the glory of my kingdom, mine honour and brightness returned unto me; and my counsellers and my lords sought unto me; and I was established in my kingdom, and excellent majesty was added unto me. Daniel 4:37 Now I Nebuchadnezzar praise and extol and honour the King of heaven, all whose works are truth, and his ways judgment: and those that walk in pride he is able to abase. Daniel 5:1 Belshazzar the king made a great feast to a thousand of his lords, and drank wine before the thousand. Daniel 5:3 Then they brought the golden vessels that were taken out of the temple of the house of God which was at Jerusalem; and the king, and his princes, his wives, and his concubines, drank in them. Daniel 5:5 In the same hour came forth fingers of a man's hand, and wrote over against the candlestick upon the plaister of the wall of the king's palace: and the king saw the part of the hand that wrote. Daniel 5:8 Then came in all the king's wise men: but they could not read the writing, nor make known to the king the interpretation thereof. Daniel 5:12 Forasmuch as an excellent spirit, and knowledge, and understanding, interpreting of dreams, and shewing of hard sentences, and dissolving of doubts, were found in the same Daniel, whom the king named Belteshazzar: now let Daniel be called, and he will shew the interpretation. Daniel 5:13 Then was Daniel brought in before the king.
And the king spake and said unto Daniel, Art thou that Daniel, which art of the children of the captivity of Judah, whom the king my father brought out of Jewry? Daniel 5:16 And I have heard of thee, that thou canst make interpretations, and dissolve doubts: now if thou canst read the writing, and make known to me the interpretation thereof, thou shalt be clothed with scarlet, and have a chain of gold about thy neck, and shalt be the third ruler in the kingdom. Daniel 5:19 And for the majesty that he gave him, all people, nations, and languages, trembled and feared before him: whom he would he slew; and whom he would he kept alive; and whom he would he set up; and whom he would he put down. Daniel 5:21 And he was driven from the sons of men; and his heart was made like the beasts, and his dwelling was with the wild asses: they fed him with grass like oxen, and his body was wet with the dew of heaven; till he knew that the most high God ruled in the kingdom of men, and that he appointeth over it whomsoever he will. Daniel 5:25 And this is the writing that was written, MENE, MENE, TEKEL, UPHARSIN. Daniel 5:26 This is the interpretation of the thing: MENE; God hath numbered thy kingdom, and finished it. Daniel 5:27 TEKEL; Thou art weighed in the balances, and art found wanting. Daniel 5:28 PERES; Thy kingdom is divided, and given to the Medes and Persians. Daniel 5:29 Then commanded Belshazzar, and they clothed Daniel with scarlet, and put a chain of gold about his neck, and made a proclamation concerning him, that he should be the third ruler in the kingdom. Daniel 5:31 And Darius the Median took the kingdom, being about threescore and two years old. Daniel 6:2 And over these three presidents; of whom Daniel was first: that the princes might give accounts unto them, and the king should have no damage. Daniel 6:3 Then this Daniel was preferred above the presidents and princes, because an excellent spirit was in him; and the king thought to set him over the whole realm.
Daniel 6:5 Then said these men, We shall not find any occasion against this Daniel, except we find it against him concerning the law of his God. Daniel 6:6 Then these presidents and princes assembled together to the king, and said thus unto him, King Darius, live for ever. Daniel 6:8 Now, O king, establish the decree, and sign the writing, that it be not changed, according to the law of the Medes and Persians, which altereth not. Daniel 6:9 Wherefore king Darius signed the writing and the decree. Daniel 6:11 Then these men assembled, and found Daniel praying and making supplication before his God. Daniel 6:12 Then they came near, and spake before the king concerning the king's decree; Hast thou not signed a decree, that every man that shall ask a petition of any God or man within thirty days, save of thee, O king, shall be cast into the den of lions? The king answered and said, The thing is true, according to the law of the Medes and Persians, which altereth not. Daniel 6:13 Then answered they and said before the king, That Daniel, which is of the children of the captivity of Judah, regardeth not thee, O king, nor the decree that thou hast signed, but maketh his petition three times a day. Daniel 6:14 Then the king, when he heard these words, was sore displeased with himself, and set his heart on Daniel to deliver him: and he laboured till the going down of the sun to deliver him. Daniel 6:15 Then these men assembled unto the king, and said unto the king, Know, O king, that the law of the Medes and Persians is, That no decree nor statute which the king establisheth may be changed. Daniel 6:17 And a stone was brought and laid upon the mouth of the den; and the king sealed it with his own signet, and with the signet of his lords; that the purpose might not be changed concerning Daniel. Daniel 6:18 Then the king went to his palace, and passed the night fasting: neither were instruments of musick brought before him: and his sleep went from him.
Daniel 6:19 Then the king arose very early in the morning, and went in haste unto the den of lions. Daniel 6:20 And when he came to the den, he cried with a lamentable voice unto Daniel: and the king spake and said to Daniel, O Daniel, servant of the living God, is thy God, whom thou servest continually, able to deliver thee from the lions? Daniel 6:21 Then said Daniel unto the king, O king, live for ever. Daniel 6:22 My God hath sent his angel, and hath shut the lions' mouths, that they have not hurt me: forasmuch as before him innocency was found in me; and also before thee, O king, have I done no hurt. Daniel 6:23 Then was the king exceeding glad for him, and commanded that they should take Daniel up out of the den. So Daniel was taken up out of the den, and no manner of hurt was found upon him, because he believed in his God. Daniel 6:26 I make a decree, That in every dominion of my kingdom men tremble and fear before the God of Daniel: for he is the living God, and stedfast for ever, and his kingdom that which shall not be destroyed, and his dominion shall be even unto the end. Daniel 6:28 So this Daniel prospered in the reign of Darius, and in the reign of Cyrus the Persian. Daniel 7:1 In the first year of Belshazzar king of Babylon Daniel had a dream and visions of his head upon his bed: then he wrote the dream, and told the sum of the matters. Daniel 7:3 And four great beasts came up from the sea, diverse one from another. Daniel 7:4 The first was like a lion, and had eagle's wings: I beheld till the wings thereof were plucked, and it was lifted up from the earth, and made stand upon the feet as a man, and a man's heart was given to it. Daniel 7:5 And behold another beast, a second, like to a bear, and it raised up itself on one side, and it had three ribs in the mouth of it between the teeth of it: and they said thus unto it, Arise, devour much flesh.
Daniel 7:7 After this I saw in the night visions, and behold a fourth beast, dreadful and terrible, and strong exceedingly; and it had great iron teeth: it devoured and brake in pieces, and stamped the residue with the feet of it: and it was diverse from all the beasts that were before it; and it had ten horns. Daniel 7:8 I considered the horns, and, behold, there came up among them another little horn, before whom there were three of the first horns plucked up by the roots: and, behold, in this horn were eyes like the eyes of man, and a mouth speaking great things. Daniel 7:10 A fiery stream issued and came forth from before him: thousand thousands ministered unto him, and ten thousand times ten thousand stood before him: the judgment was set, and the books were opened. Daniel 7:12 As concerning the rest of the beasts, they had their dominion taken away: yet their lives were prolonged for a season and time. Daniel 7:14 And there was given him dominion, and glory, and a kingdom, that all people, nations, and languages, should serve him: his dominion is an everlasting dominion, which shall not pass away, and his kingdom that which shall not be destroyed. Daniel 7:15 I Daniel was grieved in my spirit in the midst of my body, and the visions of my head troubled me. Daniel 7:17 These great beasts, which are four, are four kings, which shall arise out of the earth. Daniel 7:24 And the ten horns out of this kingdom are ten kings that shall arise: and another shall rise after them; and he shall be diverse from the first, and he shall subdue three kings. Daniel 7:25 And he shall speak great words against the most High, and shall wear out the saints of the most High, and think to change times and laws: and they shall be given into his hand until a time and times and the dividing of time. Daniel 7:28 Hitherto is the end of the matter. As for me Daniel, my cogitations much troubled me, and my countenance changed in me: but I kept the matter in my heart.
Daniel 8:1 In the third year of the reign of king Belshazzar a vision appeared unto me, even unto me Daniel, after that which appeared unto me at the first. Daniel 8:2 And I saw in a vision; and it came to pass, when I saw, that I was at Shushan in the palace, which is in the province of Elam; and I saw in a vision, and I was by the river of Ulai. Daniel 8:3 Then I lifted up mine eyes, and saw, and, behold, there stood before the river a ram which had two horns and the two horns were high; but one was higher than the other, and the higher came up last. Daniel 8:4 I saw the ram pushing westward, and northward, and southward; so that no beasts might stand before him, neither was there any that could deliver out of his hand; but he did according to his will, and became great. Daniel 8:5 And as I was considering, behold, an he goat came from the west on the face of the whole earth, and touched not the ground and the goat had a notable horn between his eyes. Daniel 8:6 And he came to the ram that had two horns, which I had there seen standing before the river, and ran unto him in the fury of his power. Daniel 8:7 And I saw him come close unto the ram, and he was moved with choler against him, and smote the ram, and brake his two horns and there was no power in the ram to stand before him, but he cast him down to the ground, and stamped upon him and there was none that could deliver the ram out of his hand. Daniel 8:8 Therefore the he goat waxed very great and when he was strong, the great horn was broken; and for it came up four notable ones toward the four winds of heaven. Daniel 8:17 So he came near where I stood and when he came, I was afraid, and fell upon my face but he said unto me, Understand, O son of man for at the time of the end shall be the vision. Daniel 8:18 Now as he was speaking with me, I was in a deep sleep on my face toward the ground but he touched me, and set me upright. 
Daniel 8:19 And he said, Behold, I will make thee know what shall be in the last end of the indignation for at the time appointed the end shall be. Daniel 8:21 And the rough goat is the king of Grecia and the great horn that is between his eyes is the first king. Daniel 8:24 And his power shall be mighty, but not by his own power and he shall destroy wonderfully, and shall prosper, and practise, and shall destroy the mighty and the holy people. Daniel 8:25 And through his policy also he shall cause craft to prosper in his hand; and he shall magnify himself in his heart, and by peace shall destroy many he shall also stand up against the Prince of princes; but he shall be broken without hand. Daniel 8:26 And the vision of the evening and the morning which was told is true wherefore shut thou up the vision; for it shall be for many days. Daniel 9:2 In the first year of his reign I Daniel understood by books the number of the years, whereof the word of the LORD came to Jeremiah the prophet, that he would accomplish seventy years in the desolations of Jerusalem. Daniel 9:6 Neither have we hearkened unto thy servants the prophets, which spake in thy name to our kings, our princes, and our fathers, and to all the people of the land. Daniel 9:7 O Lord, righteousness belongeth unto thee, but unto us confusion of faces, as at this day; to the men of Judah, and to the inhabitants of Jerusalem, and unto all Israel, that are near, and that are far off, through all the countries whither thou hast driven them, because of their trespass that they have trespassed against thee. Daniel 9:11 Yea, all Israel have transgressed thy law, even by departing, that they might not obey thy voice; therefore the curse is poured upon us, and the oath that is written in the law of Moses the servant of God, because we have sinned against him. 
Daniel 9:12 And he hath confirmed his words, which he spake against us, and against our judges that judged us, by bringing upon us a great evil for under the whole heaven hath not been done as hath been done upon Jerusalem. Daniel 9:13 As it is written in the law of Moses, all this evil is come upon us yet made we not our prayer before the LORD our God, that we might turn from our iniquities, and understand thy truth. Daniel 9:14 Therefore hath the LORD watched upon the evil, and brought it upon us for the LORD our God is righteous in all his works which he doeth for we obeyed not his voice. Daniel 9:15 And now, O Lord our God, that hast brought thy people forth out of the land of Egypt with a mighty hand, and hast gotten thee renown, as at this day; we have sinned, we have done wickedly. Daniel 9:16 O Lord, according to all thy righteousness, I beseech thee, let thine anger and thy fury be turned away from thy city Jerusalem, thy holy mountain because for our sins, and for the iniquities of our fathers, Jerusalem and thy people are become a reproach to all that are about us. Daniel 9:18 O my God, incline thine ear, and hear; open thine eyes, and behold our desolations, and the city which is called by thy name for we do not present our supplications before thee for our righteousnesses, but for thy great mercies. Daniel 9:19 O Lord, hear; O Lord, forgive; O Lord, hearken and do; defer not, for thine own sake, O my God for thy city and thy people are called by thy name. Daniel 9:23 At the beginning of thy supplications the commandment came forth, and I am come to shew thee; for thou art greatly beloved therefore understand the matter, and consider the vision. Daniel 9:25 Know therefore and understand, that from the going forth of the commandment to restore and to build Jerusalem unto the Messiah the Prince shall be seven weeks, and threescore and two weeks the street shall be built again, and the wall, even in troublous times. 
Daniel 9:26 And after threescore and two weeks shall Messiah be cut off, but not for himself and the people of the prince that shall come shall destroy the city and the sanctuary; and the end thereof shall be with a flood, and unto the end of the war desolations are determined. Daniel 9:27 And he shall confirm the covenant with many for one week and in the midst of the week he shall cause the sacrifice and the oblation to cease, and for the overspreading of abominations he shall make it desolate, even until the consummation, and that determined shall be poured upon the desolate. Daniel 10:1 In the third year of Cyrus king of Persia a thing was revealed unto Daniel, whose name was called Belteshazzar; and the thing was true, but the time appointed was long and he understood the thing, and had understanding of the vision. Daniel 10:2 In those days I Daniel was mourning three full weeks. Daniel 10:3 I ate no pleasant bread, neither came flesh nor wine in my mouth, neither did I anoint myself at all, till three whole weeks were fulfilled. Daniel 10:7 And I Daniel alone saw the vision for the men that were with me saw not the vision; but a great quaking fell upon them, so that they fled to hide themselves. Daniel 10:8 Therefore I was left alone, and saw this great vision, and there remained no strength in me for my comeliness was turned in me into corruption, and I retained no strength. Daniel 10:9 Yet heard I the voice of his words and when I heard the voice of his words, then was I in a deep sleep on my face, and my face toward the ground. Daniel 10:11 And he said unto me, O Daniel, a man greatly beloved, understand the words that I speak unto thee, and stand upright for unto thee am I now sent. And when he had spoken this word unto me, I stood trembling. Daniel 10:12 Then said he unto me, Fear not, Daniel for from the first day that thou didst set thine heart to understand, and to chasten thyself before thy God, thy words were heard, and I am come for thy words. 
Daniel 10:13 But the prince of the kingdom of Persia withstood me one and twenty days but, lo, Michael, one of the chief princes, came to help me; and I remained there with the kings of Persia. Daniel 10:14 Now I am come to make thee understand what shall befall thy people in the latter days for yet the vision is for many days. Daniel 10:15 And when he had spoken such words unto me, I set my face toward the ground, and I became dumb. Daniel 10:16 And, behold, one like the similitude of the sons of men touched my lips then I opened my mouth, and spake, and said unto him that stood before me, O my lord, by the vision my sorrows are turned upon me, and I have retained no strength. Daniel 10:17 For how can the servant of this my lord talk with this my lord? for as for me, straightway there remained no strength in me, neither is there breath left in me. Daniel 10:19 And said, O man greatly beloved, fear not peace be unto thee, be strong, yea, be strong. And when he had spoken unto me, I was strengthened, and said, Let my lord speak; for thou hast strengthened me. Daniel 10:20 Then said he, Knowest thou wherefore I come unto thee? and now will I return to fight with the prince of Persia and when I am gone forth, lo, the prince of Grecia shall come. Daniel 10:21 But I will shew thee that which is noted in the scripture of truth and there is none that holdeth with me in these things, but Michael your prince. Daniel 11:1 Also I in the first year of Darius the Mede, even I, stood to confirm and to strengthen him. Daniel 11:2 And now will I shew thee the truth. Behold, there shall stand up yet three kings in Persia; and the fourth shall be far richer than they all and by his strength through his riches he shall stir up all against the realm of Grecia. 
Daniel 11:4 And when he shall stand up, his kingdom shall be broken, and shall be divided toward the four winds of heaven; and not to his posterity, nor according to his dominion which he ruled for his kingdom shall be plucked up, even for others beside those. Daniel 11:6 And in the end of years they shall join themselves together; for the king's daughter of the south shall come to the king of the north to make an agreement but she shall not retain the power of the arm; neither shall he stand, nor his arm but she shall be given up, and they that brought her, and he that begat her, and he that strengthened her in these times. Daniel 11:9 So the king of the south shall come into his kingdom, and shall return into his own land. Daniel 11:10 But his sons shall be stirred up, and shall assemble a multitude of great forces and one shall certainly come, and overflow, and pass through then shall he return, and be stirred up, even to his fortress. Daniel 11:11 And the king of the south shall be moved with choler, and shall come forth and fight with him, even with the king of the north and he shall set forth a great multitude; but the multitude shall be given into his hand. Daniel 11:12 And when he hath taken away the multitude, his heart shall be lifted up; and he shall cast down many ten thousands but he shall not be strengthened by it. Daniel 11:14 And in those times there shall many stand up against the king of the south also the robbers of thy people shall exalt themselves to establish the vision; but they shall fall. Daniel 11:15 So the king of the north shall come, and cast up a mount, and take the most fenced cities and the arms of the south shall not withstand, neither his chosen people, neither shall there be any strength to withstand. Daniel 11:16 But he that cometh against him shall do according to his own will, and none shall stand before him and he shall stand in the glorious land, which by his hand shall be consumed. 
Daniel 11:17 He shall also set his face to enter with the strength of his whole kingdom, and upright ones with him; thus shall he do and he shall give him the daughter of women, corrupting her but she shall not stand on his side, neither be for him. Daniel 11:18 After this shall he turn his face unto the isles, and shall take many but a prince for his own behalf shall cause the reproach offered by him to cease; without his own reproach he shall cause it to turn upon him. Daniel 11:19 Then he shall turn his face toward the fort of his own land but he shall stumble and fall, and not be found. Daniel 11:20 Then shall stand up in his estate a raiser of taxes in the glory of the kingdom but within few days he shall be destroyed, neither in anger, nor in battle. Daniel 11:21 And in his estate shall stand up a vile person, to whom they shall not give the honour of the kingdom but he shall come in peaceably, and obtain the kingdom by flatteries. Daniel 11:23 And after the league made with him he shall work deceitfully for he shall come up, and shall become strong with a small people. Daniel 11:24 He shall enter peaceably even upon the fattest places of the province; and he shall do that which his fathers have not done, nor his fathers' fathers; he shall scatter among them the prey, and spoil, and riches yea, and he shall forecast his devices against the strong holds, even for a time. Daniel 11:25 And he shall stir up his power and his courage against the king of the south with a great army; and the king of the south shall be stirred up to battle with a very great and mighty army; but he shall not stand for they shall forecast devices against him. Daniel 11:26 Yea, they that feed of the portion of his meat shall destroy him, and his army shall overflow and many shall fall down slain. Daniel 11:27 And both these kings' hearts shall be to do mischief, and they shall speak lies at one table; but it shall not prosper for yet the end shall be at the time appointed. 
Daniel 11:29 At the time appointed he shall return, and come toward the south; but it shall not be as the former, or as the latter. Daniel 11:30 For the ships of Chittim shall come against him therefore he shall be grieved, and return, and have indignation against the holy covenant so shall he do; he shall even return, and have intelligence with them that forsake the holy covenant. Daniel 11:32 And such as do wickedly against the covenant shall he corrupt by flatteries but the people that do know their God shall be strong, and do exploits. Daniel 11:33 And they that understand among the people shall instruct many yet they shall fall by the sword, and by flame, by captivity, and by spoil, many days. Daniel 11:34 Now when they shall fall, they shall be holpen with a little help but many shall cleave to them with flatteries. Daniel 11:35 And some of them of understanding shall fall, to try them, and to purge, and to make them white, even to the time of the end because it is yet for a time appointed. Daniel 11:36 And the king shall do according to his will; and he shall exalt himself, and magnify himself above every god, and shall speak marvellous things against the God of gods, and shall prosper till the indignation be accomplished for that that is determined shall be done. Daniel 11:37 Neither shall he regard the God of his fathers, nor the desire of women, nor regard any god for he shall magnify himself above all. Daniel 11:38 But in his estate shall he honour the God of forces and a god whom his fathers knew not shall he honour with gold, and silver, and with precious stones, and pleasant things. Daniel 11:39 Thus shall he do in the most strong holds with a strange god, whom he shall acknowledge and increase with glory and he shall cause them to rule over many, and shall divide the land for gain. 
Daniel 11:40 And at the time of the end shall the king of the south push at him and the king of the north shall come against him like a whirlwind, with chariots, and with horsemen, and with many ships; and he shall enter into the countries, and shall overflow and pass over. Daniel 11:41 He shall enter also into the glorious land, and many countries shall be overthrown but these shall escape out of his hand, even Edom, and Moab, and the chief of the children of Ammon. Daniel 11:42 He shall stretch forth his hand also upon the countries and the land of Egypt shall not escape. Daniel 11:43 But he shall have power over the treasures of gold and of silver, and over all the precious things of Egypt and the Libyans and the Ethiopians shall be at his steps. Daniel 11:44 But tidings out of the east and out of the north shall trouble him therefore he shall go forth with great fury to destroy, and utterly to make away many. Daniel 12:1 And at that time shall Michael stand up, the great prince which standeth for the children of thy people and there shall be a time of trouble, such as never was since there was a nation even to that same time and at that time thy people shall be delivered, every one that shall be found written in the book. Daniel 12:4 But thou, O Daniel, shut up the words, and seal the book, even to the time of the end many shall run to and fro, and knowledge shall be increased. Daniel 12:6 And one said to the man clothed in linen, which was upon the waters of the river, How long shall it be to the end of these wonders? Daniel 12:8 And I heard, but I understood not then said I, O my Lord, what shall be the end of these things? Daniel 12:9 And he said, Go thy way, Daniel for the words are closed up and sealed till the time of the end. Daniel 12:10 Many shall be purified, and made white, and tried; but the wicked shall do wickedly and none of the wicked shall understand; but the wise shall understand. 
Daniel 12:13 But go thou thy way till the end be for thou shalt rest, and stand in thy lot at the end of the days.
Hello guys, just to let you all know I have been talking with Tyberius. I am in awe of his project and have always wanted to build such a robotic machine since the days of seeing Johnny Five. I have ordered ALL the parts to build this machine, and also the modded parts that Tyberius has used in his J5. Tyberius has also been very helpful in ordering the Pico ITX parts for me, as they are not available here in NZ and the company that sells them does not ship internationally. So when I get everything together I will post a detailed construction log with photos. Tyberius has given me permission to make a replica of his machine. Thank you. These are the shocks that I will be using for the J5 torso up/down load movements. Go man go! Like I said, I'm very flattered that you're building J5 a brother. Hey guys, this is great. Thanks so much for all your help and enthusiasm in getting this project off the ground. Thank you also, Tyberius, for acquiring the PC parts needed for this project. M'mmm, arrived eh? One step closer. Thanks. I can understand you foaming at the mouth to get it finished; I think I would too, but I have learnt that doing a detailed photo album as you go is invaluable, though it also takes a lot of time and patience. Just around the corner! I too will be following 4mem8's J5 so we can have a huge family going here. I can't wait. Plus I love what 4mem8 does with all his robots; I bow down to the master builder 4mem8. Oh, those kind words. Thank you, GW. Can't wait to start this project. Finally I have received my J5 parts, and here are a few pics before I open them up. I am looking forward to this project in conjunction with Tyberius's help. Most parts, except the ITX PC components; these are coming soon. There are some parts that may not be used, but I purchased them just in case. More to come as I start building this project. The modified design truly takes it several *really* big steps beyond the original concept. Grats!
Adrenalynn: He is great, and thank you for confirming that for me. Did you get to speak with Andrew? I would love to meet him! I must set up a video link through AIM and see him. I have spoken to him on numerous occasions through AIM. It would also be nice to set up AIM with you, if you wanted to! Seriously, we had some very lively discussion, both during the show and after. It was a blast getting to meet and interact with a handful of the "usual suspects". It was a blast getting to meet everyone. I have a robo file for you, Andrew, with some additional processing that you might find useful for noise reduction (no custom dll required). Let me know if it's not self-explanatory! Also, look at markers. You should be able to simultaneously track multiple blobs of multiple colors. So if someone picks up the red ball, it'll follow it, and if they pick up the green cube, it will still follow. Do you have any pictures or video of this? It would be nice to see him in action! I have some pictures of it, but unfortunately I don't have my micro-USB cable with me, so they're stuck on my camera for the time being. Adrenalynn: That was so cool that you graduated to J5. Really uncanny though; he must have read your mind, he he. Tyberius: Sorry I missed out on the video feed; the time difference was too great. I tried to contact you on Sunday, your Saturday, to see if we could do this. I'm at work now. Damn. Internet is horribly spotty here anyway; I even paid $20 for the 'good' internet and it constantly drops connection. M'mmm, I wonder why that is!! I would have thought that a place like that would be up to scratch with the internet, and WiFi for the games. Remember it's a building built in the late 1800s / very early 1900s. Heavy steel everywhere, lots of ionization from the ocean constantly banging on it, literally. It's just not a great place for RF. In fact, I'm thinking it's about the least desirable place I can imagine to hold an event like that.
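The multi-color blob tracking described above (thresholding, then tracking several colors at once) can be sketched outside RoboRealm too. Below is a minimal pure-NumPy illustration; the color ranges and the synthetic test frame are made-up values for demonstration, not calibrated against any real camera:

```python
# Illustrative RGB ranges (guesses, not calibrated): name -> (low, high)
import numpy as np

COLOR_RANGES = {
    "red":   ((150, 0, 0), (255, 90, 90)),
    "green": ((0, 150, 0), (90, 255, 90)),
}

def track_colors(image):
    """image: HxWx3 uint8 array. Returns {color: (row, col) centroid or None}."""
    results = {}
    for name, (lo, hi) in COLOR_RANGES.items():
        lo, hi = np.array(lo), np.array(hi)
        # Boolean mask: pixels whose channels all fall inside the range
        mask = np.all((image >= lo) & (image <= hi), axis=-1)
        ys, xs = np.nonzero(mask)
        results[name] = (int(ys.mean()), int(xs.mean())) if len(ys) else None
    return results

# Tiny synthetic frame: a red patch and a green patch
frame = np.zeros((20, 20, 3), dtype=np.uint8)
frame[2:6, 2:6] = (200, 20, 20)      # red blob centered near (3, 3)
frame[12:18, 10:16] = (20, 200, 20)  # green blob centered near (14, 12)
print(track_colors(frame))  # → {'red': (3, 3), 'green': (14, 12)}
```

A real setup would usually threshold in HSV rather than raw RGB and split touching blobs with connected-component labeling, but the principle is the same: one mask per color range, each tracked independently, so the red ball and the green cube never interfere with each other.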
Horrendous parking, comparatively tiny building, awful for RF, in an expensive city that is almost impossible to get around in. Consider that Silicon Valley is only 45 mins away, and has some of the largest hangar buildings in the world for event rental. Infinite parking, less expensive venue and lodging. Easier access. Better roads. Fewer tourists. I just don't get it sometimes. . . Adrenalynn: Ah, I now see your point, and can understand why it is difficult; it all makes sense now. Thanks for the info on that. Sorry - a poor choice of words: "Remember, ..." - you couldn't have known, and I didn't mean it that way. Mea culpa! Adrenalynn: You're forgiven, he he. I know what you meant. On your Sat night geeky talk, any interesting chat about T's J5? I have laid out the parts to look at and peruse through, and to get used to what they are and where they go. Also note there are NO Pico ITX parts shown here; these are being sent to me by Tyberius soon. J5's track and base parts. 7 HR5990 servos, top back row. So I may have to do both at the same time. Holy crap! I never had all the parts laid out at once, since I built as I went... and to think we still have all the stuff I'm sending you to add to it. Didn't realize there were so many parts. Ha ha, you're right, Tyberius, lots of pieces. However, there are a couple of items that may not get used, as I got these just in case I make any changes as I go. But looking forward to this project. Fitted the tracks together last night. I was disappointed to find that the nylon rods were short by 9mm after you inserted the black end plugs, which means that you have each track unsupported by 1 link; all you need is about 2mm clearance for this. I had thought of replacing these, but my eagerness to build it outweighed that. For the cost of J5 you would think they would get this right. I never noticed that when I was building mine, but I was in a rush to get it together in my excitement.
I can tell you though, I've never noticed any structural integrity issues with the tracks; they're rock solid. Solid they are; that's why I carried on. Nice tracks. Did you get my thresholding suggestion, Tyb? I think it'll clean up your processing a bunch. Yup, I did. Still tweaking with it a bit to get it right though. I am going to do a detailed tutorial of this J5, guys, so I will post the odd pic to show progress as I go. Here are a couple for starters. Crap, 4mem8, I wish I had asked you to make me a silicone mold of one of the track sections so I could make my own, but I know this is too late, because you would have to replace the buttons on the ends after removing them. Oh well, I guess I'm stuck with buying the tracks and then scratch-building the rest. To be honest, the tracks are fairly inexpensive and you'd probably be happier with just buying a set of them. I would think building them from a mold wouldn't be worth the time investment when you can purchase them relatively cheaply. Yeah, I would have to go along with that, Tyberius. GW, they are not that expensive to purchase as an individual unit. BUT I know you too well, and know you would like the challenge of making them. Ya, I know the tracks are only around 20 bucks, but I love to make my own parts, plus I have the material for it. This is just something I enjoy doing. My J5 will be all handmade with the exception of the treads, unless I can get one piece of it. Why not carve your own mold from high-res photos? Or do your own design? I think I will make a new design. That way my J5 stepchild would really stand out from the rest. Great idea, thanks Adrenalynn!! Once I get started on the first track I'll shoot a pic over to see what you all think. I just knew it!! You had to make your own, GW, didn't you! Then again, I know you well, he he. Good luck, and I will be watching closely. Awesome! Can't _wait_ to see your results!
ScuD: Thanks. I do this with all my projects in detail; it helps me and others as well. 180 pics so far on this project and climbing. My web site has the full pics so far; when finished I will post a tutorial here. 4mem8, how are you going to interface the Pico ITX? I have drooled over this computer board since it was first announced, thinking of the ways I could use it (mostly in cars). I have to be honest, I never thought of it as a robot controller until now and am excited to see the implementation. With the capability of this computer, some real intelligence can be inserted into a machine. Please keep us informed. Apburner: Thank you for your interest in this J5. As Adrenalynn has stated, Tyberius first built this modded J5, and I was also in awe at his project, to the point I had to spend a lot of $'s to replicate this awesome beast [with permission I might add]. I have a lot I owe to Tyberius for sharing this project with me, including his time and advice. Follow the link that Adrenalynn has posted to find some amazing pics, video and explanation of construction. I have made mine with slight mods also, so they are not exactly the same, but similar. 4mem8, great work! Tyberius has certainly "grown" a few more J5s from his project; it just shows how good it was. I have this sneaking suspicion we haven't seen more than the tip of the proverbial iceberg yet! I think you're right; I can think of two more in the "pipeline" so far. The Johnny Five has great appeal and is well known; that in itself makes it a popular build. Thanks for the replies, Adrenalynn and 4mem8. Like I said, that computer is an amazing platform for many things. The size and power requirements make it useful everywhere. This use in a robot is the first time I have thought of it as a controller, and it excited me. Thanks to Tyberius and everyone else on this site for a forum like this. I am one of the older folks into tech as a hobby.
I have forwarded this site's address to my 2 little nieces (smart girls they are), in hopes of exciting them into a hobby of electronic controllers and maybe robotics. Apburner: I too am of the older generation, but this robotic game keeps me feeling very young, and I can't wait each day to do a little more on it. Here are some other mods that Tyberius also suggested for mine. I had a long neck in the last post, and Tyb suggested omitting the neck extension, so I thought I would try it to see what it looks like. Not bad. The only difference with my version is that I can pan/tilt at a much lower angle due to the higher neck. What do you all think? Personally I think the long neck looks better, and as you said, the pan/tilt angle is better. I am also toying with the idea of having his head tilt from side to side on mine, a fairly easy mod to do as well. Love all the pics tho, 4mem8; you can never have enough pictures! From a purely aesthetic point of view, I think the long neck gives it more of a sympathetic kind of look, whereas the short neck makes it look more as if it has the character of an overfed chihuahua. Thanks guys, you have confirmed my thoughts on this mod; I will put the longer neck back. Tyberius has also stated that a more 3-dimensional look might be better than the flat ASB-18, so I had a look at this choice, the ASB-503, and I think it would look great. ScuD: It's amazing how the different parts you add change the look of J5. One could very well get carried away with all the options one has, and I love it. nanomole39: Pics, pics, pics; I love taking them and sharing. That's what this forum is all about: no hiding of thoughts, just sharing, and this is what I have found here. I love it. Getting another 6 HSR5990 servos that I bought from Tom Chang soon, so I will replace another 5 HR645s: the main up/down section of the body, the base rotate, and the shoulder. I want to program these servos.
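For the servo programming mentioned above, the usual starting point is mapping a target angle to a pulse width. Here is a minimal sketch; the 600-2400 microsecond endpoints are a common hobby-servo default, an assumption here rather than a verified spec for these particular servos:

```python
# Common hobby-servo pulse endpoints (assumed; check the servo's datasheet)
SERVO_MIN_US = 600    # pulse width at 0 degrees
SERVO_MAX_US = 2400   # pulse width at 180 degrees

def angle_to_pulse_us(angle_deg):
    """Map an angle in [0, 180] degrees to a pulse width in microseconds."""
    if not 0 <= angle_deg <= 180:
        raise ValueError("angle out of range")
    span = SERVO_MAX_US - SERVO_MIN_US
    return SERVO_MIN_US + round(span * angle_deg / 180)

print(angle_to_pulse_us(90))  # → 1500 (center position)
```

A servo controller board then repeats that pulse roughly every 20 ms; the math stays the same whichever controller ends up driving the servos.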
So just the hands and wrists are HR475s and the pan/tilt are 645s; the last two are overkill, but I had them spare anyway. I will have quite a bit of electronics on the head eventually. Robotmaker: Thank you for your comments. The largest NiMH batteries that Tyberius could find for me were 4200mAh: 12v for the motors and 9.6v 4200mAh for the ITX, and I do have a [email protected] but may replace it with a 7.2v 4200mAh pack and include a 6v voltage reg for this pack so as to power the 6 analogue servos from the same pack. I agree that the base needs to be larger to accommodate all the electronics more easily; as it is, I may extend the rear to help in this area. Adrenalynn: Yes, I know about the D cells; I have 10 12,000mAh ones for other purposes, but they are too big for J5. 5400mAh is a good size. I should get a reasonable time from 4200mAh though. May look at the 5400 for the servos at a later time. Thanks for your info. We also got a killer price point on the 4200mAh cells, which was another deciding factor. Sorry Tyberius, I almost forgot about that. Thank you for reminding me, and thank you again for finding that good price. It makes a big difference to the overall price of J5. You should post that in the links directory too. I have disassembled my J5 down to the base, as I am not happy with the rotational base system; it is inadequate for the upper weight mods, as the body rocks from side to side. I have drawn up a bearing system that will be rock solid, using the same base but replacing the upper disc with two 3mm alloy discs, 93mm dia. The top alloy disc will have the cut-down plastic disc fitted under it. The bottom disc, 93mm dia, is a ring with a 63mm I.D. hole; this is bolted to the base. The outer edges will house the ball bearings. As the top disc is lowered onto the bearings, the plastic disc under the top disc will engage the servo, and the plastic disc will fit snugly into the lower ring. This will make for a rock-solid bearing.
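As a rough sanity check on the pack sizes discussed above: run time is approximately capacity divided by average draw, derated a little so the NiMH cells aren't run flat. The current draws below are illustrative guesses, not measurements from the actual robot:

```python
def runtime_hours(capacity_mah, avg_draw_ma, usable_fraction=0.8):
    """Estimated run time, derated so the pack isn't fully discharged."""
    return capacity_mah * usable_fraction / avg_draw_ma

# 4200 mAh motor pack at an assumed 3 A average drive load:
print(round(runtime_hours(4200, 3000), 2))  # → 1.12 hours
# 4200 mAh pack feeding the Pico ITX at an assumed 1.5 A:
print(round(runtime_hours(4200, 1500), 2))  # → 2.24 hours
```

On guesses like these, the 4200mAh packs give roughly an hour of hard driving and a couple of hours of computing, which matches the "reasonable time" expectation; stepping up to 5400mAh cells would scale both figures proportionally.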
Going to the engineer's shop today to price it out. At this stage I will get 3 made: 1 for my J5, 1 for my next project [you will love this rover] and a spare. I think Tyberius may be interested in the 3rd if the price is right. Robotmaker: The lazy Susan is a good idea, but not suitable for the J5 system if using the plastic base as I am. Well, a step backwards here on my J5. I have decided to rip it apart again, as I am not satisfied with the rotation base unit, and I am in the process of replacing the bearing system, as it is not up to the extra modded version. So here are some pics of the disassembly. In bits again, awaiting new parts. Have to do some other work on J5 whilst the new bearings are being made. Ok, I have pulled my J5 apart, as I am not happy with a few of the designs from the stock system, so I have redesigned the upper platform to a higher level, 50mm against the stock 38mm. This allows batteries to be placed on edge and leaves room [plenty] for 5 C cells on each side of the center batteries for the 12v supply to the motors, in an upright position. The top plate needed quite a bit of re-fabrication, but it was well worth the effort. Also, I will be replacing the stock rotational bearing system, as this was found to be rather sloppy with the added upper mods. This bearing is similar to the LBA system. It will have a 3-4mm lower alloy plate with a 93mm outer dia and a 63mm inner dia, recessed for ball bearings. The upper alloy plate will be 3-4mm, 93mm dia, with a central 3mm hole. The original plastic servo disc will be retained, but cut down to 60mm dia and fitted to the underside of the top alloy disc, balls inserted, and the two discs put together and fitted to the servo in the same manner as before. Pics so far to date. Top: original stand-off posts, 38mm, to be replaced by 50mm posts. Masking off and cutting the Lexan upper plate to accommodate the motors. Lexan plate cut to detail. Other side cut the same. Cut two alloy angle plates, 30mm x 12mm x 2mm.
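For the ball-race dimensions above (93mm outer discs, 63mm inner bore), a quick back-of-envelope count of how many balls fit around the pitch circle is handy when ordering balls. The 4mm ball diameter and 0.5mm spacing here are assumptions for illustration, not figures from the build:

```python
import math

OUTER_DIA_MM = 93.0  # top/bottom alloy disc diameter (from the post)
BORE_DIA_MM = 63.0   # inner hole of the lower ring (from the post)

def max_balls(ball_dia_mm, gap_mm=0.5):
    """Balls that fit around the race's pitch circle, with a small gap each."""
    pitch_dia = (OUTER_DIA_MM + BORE_DIA_MM) / 2  # ball centers ride here
    circumference = math.pi * pitch_dia
    return int(circumference // (ball_dia_mm + gap_mm))

print(max_balls(4.0))  # → 54 balls of 4 mm
```

More balls of a smaller diameter spread the load over more contact points, at the cost of lower per-ball load capacity; for a slow-turning torso rotate either choice should be solid.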
Top Lexan plate offered up to check the motor position. OK, here the Lexan plate is shown in its new position; note the height difference (the original height is shown by the alloy bar just below the motor). Two 3 mm holes drilled in each bracket. Alloy brackets fitted to the underside of the upper Lexan plate; these replace the lugs on the edge of the Lexan plate that located it in place. Close-up of the extra cut-out for one of the DC motors, as one has its power terminals at the bottom of the motor; this cut-out is on one side only. Two alloy brackets for the 7.2 V and 9.6 V batteries. Brackets in place for mounting. Batteries in place on the Lexan base. Ten batteries for the 12 V motor system, in two packs of five. J5 disassembled, awaiting redesigned parts. More to come.

Beautiful work, Mike! I really like the redesign... wish I had a shop at my place. I'll probably at some point have a new base CNC'd out of aluminum, but widening the space between the decks will be my first order of business.

Nice mod, 4mem! That really adds some space! Did you actually make those non-straight notched cuts with the bandsaw too? Wish I had your kind of patience!

I wouldn't try to tap threads into Lexan generally, not when inserts are cheap and plentiful. I've personally put 12,000 lbs on reinforced Lexan. It depends as much on engineering as on thickness, but really, what are we talking about here, 5 lbs? 10 lbs tops? Even medium-density plastics of moderate thickness are going to hold that, and toilet paper will hold it if properly reinforced.

Tyberius: Thank you for your nice comment. I thought you might like the redesign. As I went to fit the batteries and actually saw the space, I thought, nah, this has to change. Against my will I said it has to happen, so many tears later it was in bits again. The motors are the stock motors that came with the kit, 50:1, 12 V; I have not tested the torque. I would also like a CNC machine, maybe one day. A lathe purchase is on the list first.
Adrenalynn: "Did you actually make those non-straight notched cuts with the bandsaw too? Wish I had your kind of patience!" Yes, I did. This sort of thing becomes second nature to me, as I work all day with these machines and have miniature versions at home for modelling purposes. Pleased you like the mods. I like Lexan, it's cool stuff; I usually use alloy for my bases.

You could even use all the spare 645MGs you'll have after upgrading to 5990TGs for the hands. The 475s just barely cut it for picking up a beer with two hands.

A few more shots of the battery system, almost complete. Bending the alloy for the battery straps. Brackets made for securing the two 5-cell packs. Final securing of the central yellow wire. This mod makes for a very compact system.

Tyberius: Good point, and I may just do that to the hands. All good points, Robotmaker; I will keep them in mind.

Yes, I have a good mill. Have you ever tried machining 20 exactly identical pieces on a mill? That's what a CNC is for.

Adrenalynn: I would love a CNC mill in my workshop. I will definitely look at this at a later stage; any good brands you can recommend?

Everything I've used has been on the larger end. Chat with DroidWorks; he's become a Sherline dealer, I understand.

Cool, thanks for the info. I've heard of the brand name even here in NZ.

I disagree. Most robots I play with have at least half a dozen servos. That's 18 pieces, 12 identical, on a 2D CNC. If we start getting into humanoids, a hundred identical pieces is not outside the realm of reality. The other benefit CNC gives you, even when building single pieces, is the ability to machine nice curves that are otherwise difficult or impossible to do on a manual mill.

OK, I just fitted the top and base back together and find that I do not have to replace the bottom square supports; these, in conjunction with the altered top alloy brackets, are just fine, so I will leave them as is. Top alloy brackets fitted; one small one at the rear takes the place of the Lexan slot. Almost back together.
Note that the motors are now recessed into the Lexan.

Update to my J5: I have been busy making stand-offs for the extension acrylic plate at the rear; due to lack of space I am going to tier it in three levels. Drilling some alloy stand-offs for the acrylic sheet. Cutting acrylic sheet to detail. The hole is for the battery under the ITX board, as I only had a certain length of stand-off and was slightly short. Wires from the three batteries have been terminated ready for later use; the motor wires are now fitted too. ITX and solid-state drive mounted on the acrylic base; the four stand-offs are for another tier. Edge view of the ITX system. Top view of the ITX system. Acrylic base and ITX system mounted on the back of J5. Another pic of the mounted system.

Darkback: I know where you are coming from, as there are a lot of nice projects that people are doing here on this forum, and I'd love some of them.

Adrenalynn: I am using a small bandsaw to cut this Lexan/acrylic sheet. The motor speed is 1600 rpm with a 6-teeth-per-inch blade. Do not use a fine blade like 10T per inch, as this will clog as the Lexan/acrylic sheet heats up; 6T works great. Then sand the edge with 120 grit, then 180, and finally 320 grit, and you will get a very nice finish. Also, be sure to leave the paper or plastic covering on until all cutting and holes have been drilled.

Yeah, the fine blade usually heats the product and the blade tends to stick to the sheet; the 6T clears the plastic without melting it. Masking is good to use; I use it a lot on plastics. Speed isn't as critical as the TPI.

More on the J5 saga. Underside of my ITX; the cut-out is for the ITX battery. Next tier for the ITX P/S and battery switch bank. Three switches and power LEDs for 12 V, 9.6 V and 7.2 V. Switch bank and ITX P/S mounted on top of the HDD. Top view of the next stage; one more layer to go, which will have the USB and serial connectors.

I've had pretty decent results using a RotoZip or Dremel... I have both.
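The blade advice above boils down to a simple rule of thumb that could be encoded as a sanity check. The 6 TPI threshold and the sanding sequence come straight from the post; treating 6 TPI as the cut-off for "coarse enough" is my own simplification.

```python
# Rule of thumb from the thread: coarse teeth (6 TPI) clear plastic chips
# before they melt; fine teeth (10+ TPI) clog and heat the sheet.
# The <= 6 threshold is a simplification of that advice, not a spec.

def blade_ok_for_plastic(teeth_per_inch: int) -> bool:
    """True if the blade is coarse enough to cut Lexan/acrylic cleanly."""
    return teeth_per_inch <= 6

# Grit progression quoted in the post for a polished edge:
SANDING_SEQUENCE = [120, 180, 320]

print(blade_ok_for_plastic(6))   # True
print(blade_ok_for_plastic(10))  # False
```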
They do melt the plastic, and sometimes it does crack... but when it doesn't crack, you can polish the edge with a torch if you're careful.

A few more pics. These are the start of the wiring process whilst I wait for the alloy rotational base bearing to be made. Acrylic sheet for the top plate: the left cut-out is for the serial cable, the centre is for the ITX cables, and the right is for the ITX power cables. Power cables for 12 V, 9.6 V and 7.2 V. Resistors for the status LEDs. Acrylic sheet drilled for cables. Sabertooth motor driver installed. Power cables being routed to their correct areas; note the three fuse blocks at lower right and the Sabertooth motor driver at left. Wiring looms taking shape and being secured. Fuses out and checking the LED status lights.

Thank you so much for your kind comment, Adrenalynn. This is my standard when it comes to finishing my bots, and it is just as important as the mechanical side of things. Just a pity that I do not have the skills in programming.

If yes, is it supposed to shift around?

I echo Adrenalynn's praises; there is a lot of attention to detail you have put in there! Your project just looks better and better each time you post an update.

BADfish10: Hey, thanks so much for your kind words. I am only too pleased to share all of these projects with you all; all of your encouragement keeps me going.

Adrenalynn: Nothing wrong with the flowchart scenario in construction; it's a good idea. And yes, I will have to do something about programming. Not sure where I will find the time, though.

Tyberius, this is how it works: when one makes a masterpiece such as your J5, another picks up where you left off. As with the rockets that brought man to the moon, we are now in the process of building a craft to take man to Mars.
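Sizing the series resistors for the 12 V / 9.6 V / 7.2 V status LEDs mentioned above is plain Ohm's law. The LED forward voltage (2.0 V) and current (15 mA) below are typical red-indicator values assumed for illustration, not measured from the build.

```python
# Series-resistor sizing for status LEDs on the three supply rails.
# Vf = 2.0 V and I = 15 mA are assumed typical values, NOT from the build.

def led_resistor_ohms(supply_v: float, led_vf: float = 2.0,
                      led_ma: float = 15.0) -> float:
    """Ohm's law: R = (Vsupply - Vf) / I."""
    return (supply_v - led_vf) / (led_ma / 1000.0)

for rail in (12.0, 9.6, 7.2):
    print(f"{rail} V rail: {round(led_resistor_ohms(rail))} ohms")
```

In practice you round up to the next standard resistor value (e.g. 680, 510, 360 ohms) so the LED runs slightly under the target current.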
Everything that is made can always be made better, and when 4mem8 completes his J5, someone will go further than he has done. This is the only way to keep improving the J5 until he is perfect. Who knows, AI may be implanted in him next; we'll just have to wait and see.

Oh, all these kind words about J5. Just remember that if it wasn't for Tyberius I wouldn't have got this far with my project. I still have a long way to go before I get him working, and when it comes to the software it could slow down big time; we will have to see. But all of your encouragement will probably see me through. Thanks again, all.

4mem8, it is good to know where your heart is, remembering Tyberius and always giving him the credit where credit is due. You are truly one of a kind. Keep up the great work on J5!

Oh, gees, more praise *blushes*. Thanks, GW, for more kind words. It's great here how everyone helps out others in need. If I can repay just a fraction of what I have received here on this forum, that will make me happy.

Alloy bracket for the three charging jacks. Wiring up the charging jacks. My new 8" TFT touch screen, a well-needed piece of equipment; it will come in handy for the J5 OS installation.

That's too bad; with your great programming skills you could really make a huge addition to the J5 software. I'll second that one, GW. There would be no end to what J5 could do.

Acrylic sheet for the final layer; this will house the serial port and USB ports. Cut-out for the serial port. Holes drilled for hex rod. Hex stand-offs for the top layer of acrylic. 4 mm acrylic sheet. I have decided to increase the length of the front rotational section by 50 mm; this will offset the rear electronics area and keep it in balance. Front extension and rotational base.

You could well be right, GW; nothing is set in concrete yet, just trying it out. I may yet leave it in its right position but raise it 50 mm instead; that way it will be in the same position but just higher, to clear the rear electronics.
USB ports with alloy brackets for mounting to the acrylic sheet.

Man, that is just amazing! As always, 4mem8, great stuff! Keep it up until we need your shell for our Mech! This is looking fantastic... a very sweet-looking J5!!!

Hey guys, thanks for the support. I will try to keep the snippets coming as I progress. There will of course be a tutorial when finished; photos so far to date amount to 250 pics and rising.

Very nice work, Ty, looks great. Oops... lol. In that case, looks great, 4mem8, great work. Thanks, DroidWorks; I'd just noticed the mistake myself. Easily done, no problem.

What regs are you going to use? LM338? 5 amp, or something different? Your attention to detail is awesome! Thanks for all the posts as you progress on your build.

Robot maker: I prefer the jacks on the side of J5; space is at a premium at the rear, and I had space on the side. Yes, I have three fuses, for 12 V, 7.2 V and 9.6 V. I will be using two UBECs rated at 10 A each for the 7.2 V section, as I want a stable 7.2 V for my HSR-5990 servos: fully charged, the pack reaches 8.4 V, too high for those servos. The UBECs come preset for 5 V, but with software and a Castle Link the output can be changed to whatever you want.

My purchase of two 10 A UBECs and a programming cable to change the output voltage. The default output is 5.1 V; I will change this to 7.2 V for my digital servos and 6 V for the analogue servos. I am having to split time between my J5 and ED-209 now, so pics will be spread between the two. Also I have to finish my Wall-E off.

Thanks, Robot maker. Cool, I will look out for your posts.

Ha ha, I thought you might answer that, Adrenalynn. OK, if I start I really need to have the hardware as well so I can implement it. Most of my hardware gear is PBASIC-orientated and not C, except my Pico-ITX setup, which is not up and running yet.
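As a rough sanity check on the 10 A UBEC ratings discussed above, here is a worst-case current budget for the servo rail. The per-servo peak draws are ballpark guesses for illustration, not datasheet figures.

```python
# Hypothetical worst-case current budget for the servo supply rail.
# Per-servo peak draws below are illustrative guesses, NOT datasheet values.

SERVO_PEAK_A = {"HSR-5990TG": 3.0, "HS-645MG": 1.5, "HS-475HB": 1.0}

def rail_peak_amps(counts: dict) -> float:
    """Total draw if every servo on the rail peaked simultaneously."""
    return sum(SERVO_PEAK_A[name] * n for name, n in counts.items())

# Five digital servos on a single 10 A UBEC:
print(rail_peak_amps({"HSR-5990TG": 5}))  # 15.0
# The full complement on a 20 A supply:
print(rail_peak_amps({"HSR-5990TG": 10, "HS-645MG": 6, "HS-475HB": 4}))  # 43.0
```

In practice servos rarely all stall at once, so the average draw is far lower; the point is only that headline UBEC ratings leave little margin at simultaneous stall.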
Mmmm, that sounds like me, Robot maker. I do have a Phidgets 8/8/8, two 4-port USB hubs, two 4-port servo controllers and a huge range of GP2D sensors; no MaxSonars, I2C-to-USB or Axon, though. I should be able to use my Pico-ITX motherboard with RoboRealm and USB parts, or is the Axon better suited for the job? Also, with RoboRealm I take it you can access the sensors, IR and MaxSonars?

Not directly. Those sensors don't have any way to talk to a Pico-ITX without some kind of interface.

Adrenalynn, does the Axon talk to the sensors through RoboRealm, or do you have to have an interface as well?

Last I checked, there aren't drivers for the Axon to talk to RoboRealm yet. I just did a TCP connection between them.

Great job, can't wait to see the video.

Getting the body back together. I have just made a short video of the rotational base, so I will post it as soon as it is uploaded. A short video of my new rotational base; sorry for the poor quality. I can't seem to reduce the video size, as it appears to be quite large; this accounts for the poor quality.

Robot maker: I have not fired anything up yet, so I cannot answer your question. It will not be ready for about a week or so; I'm just wiring up the LED mouth and a few other bits to tidy up first. Ha ha, good junk though, good junk.

Constructed the LED mouth circuit this afternoon and am now fitting it. The eyebrow servos are next; I think these will work out quite well.

Robot maker, here are a couple of sites. This one uses 10 LEDs per channel for a wider mouth; mine used 5 per channel for a smaller mouth. http://www.canakit.net/ (5 LEDs per channel). This is the one I am using for my J5; if your J5 is bigger, I would go for the 10 per channel.

More pics tonight of my J5's LED mouth construction. Had a good day building the circuit and fitting it to the head, with a neat mod to the housing. That's it until next week, when I will look at the eyebrows and the mini servos to control them.
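Earlier in this exchange a plain TCP link between RoboRealm on the PC and the Axon board is mentioned. The thread does not describe the protocol, so the sketch below shows only a generic length-prefixed framing over a local socket pair; the command string and framing scheme are both assumptions for illustration.

```python
import socket

# Generic sketch of a length-prefixed TCP command link, of the kind one
# might use between a PC vision package and a microcontroller board.
# The one-byte length framing and "PAN 90" command are assumptions only.

def frame_command(cmd: str) -> bytes:
    """Prefix a command with its byte length so the receiver knows where it ends."""
    data = cmd.encode("ascii")
    return bytes([len(data)]) + data

def send_command(sock: socket.socket, cmd: str) -> None:
    sock.sendall(frame_command(cmd))

# Loopback demonstration with no real hardware attached:
a, b = socket.socketpair()
send_command(a, "PAN 90")
length = b.recv(1)[0]
print(b.recv(length).decode("ascii"))  # PAN 90
a.close()
b.close()
```

A real link would add reconnect logic and handle partial `recv` reads; a socket pair on one machine always delivers this tiny message in one piece.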
The video of the base was very smooth, good to see that. As for the mouth, I'm glad I sent you the Tri-Bot's mouth as an inspiration for your J5 mouth. It turned out great, 4mem8.

The LED mouth was a combination of Tyberius's design and the Tri-Bot's design, so until it's fired up I am not sure if it will do as planned, but I think it will, particularly in the dark. Thanks, GW and Tyberius.

Inspiring work as always, 4mem8; every step a work of art. I'd show you some of the solder joints I did today, but they'd probably make you sterile.

metaform3d: Ha ha, that was funny. I bet your solder joints aren't as bad as you say. Also, thanks for your comments. I love taking the photos as much as I love building bots.

Robot maker: I think I will have them going from the inside to the outside as the volume increases, as this will give the illusion of a big smile when the sound is louder. As for the chip, I have always used the LM3915 and LM3914, but this is a different chip; I can't recall the number, but if you check my links above, they will take you to the site where I got mine from (Canakit.com), and it states the chip there.

The project for this weekend is to fabricate the eyebrows, remove the bolts from the cameras and replace them with nylon ones, and install two 10 A UBECs in parallel for 20 A for the 7.2 V digital servos, reduced to 6 V for the analogue servos. These UBECs can be programmed with the Castle Link, which is plugged into your PC and programmed for the voltage you want; the default is 5 V. An ESC programming adapter for Windows programs your UBEC from your PC.

Is this assuming that I am using all my servos: 10 5990s, 6 645s and 4 475s? I may have to change the wire rating from the UBECs, as I feel it is not going to be heavy enough for 20 A.

The wires should be fine; the connectors, most certainly not.

Mmmm, not so sure, Adrenalynn. If you look at the twin cable, it is lighter than the servo wire.
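The inside-to-out mouth effect described above can be sketched in software. This is a minimal sketch assuming 5 LEDs per channel mirrored left and right, per the board mentioned in the thread; the actual hardware uses a bar-graph driver chip, not code.

```python
# Software sketch of the "inside to out" LED mouth: as volume rises, LEDs
# light from the centre outward, giving a widening-smile effect.
# 5 LEDs per side is the figure from the thread; the mapping is illustrative.

def mouth_pattern(level: int, leds_per_side: int = 5) -> str:
    """Return a 2*leds_per_side string; '#' = lit, '.' = off, centre-out."""
    level = max(0, min(level, leds_per_side))
    side = "#" * level + "." * (leds_per_side - level)
    return side[::-1] + side  # left half mirrored against the right half

for vol in (0, 2, 5):
    print(mouth_pattern(vol))
```

At volume 2 this gives `...####...` (four centre LEDs lit), and at full volume all ten light up.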
The input side should be OK, but the output will go to the SSC-32 to be distributed amongst all those servos, and the wire from the UBEC to the SSC-32 is pretty light.

I could do that; it just means I have to order another one, which is a pain. Being in NZ, freight kills the price you pay for it. I could get GWJax to send me another one [Florida]; that would cut freight costs.

5. Can the CC BECs be run in parallel for very high current applications?

Mmmm, I did not spot that one, Robot maker. Thanks for the find; you saved me a headache. So it looks like what you said before: one UBEC for the five 5990 servos and one for the analogue servos. +1 rep for you.

Whatever you need, 4mem8, just let me know and I'll get it for you!!

Thanks, GW. I know I can always count on you if I need these items. Both you and Tyberius have been outstanding in this area of my robotics. Thank you both.

So much for getting my eyebrows done on J5 over the weekend; I ended up taking pics for the competition robot that is coming up, which took most of the weekend. So, we'll see what I can get done this weekend.

That's cool. Maaaaan, now I want to build a J5, but the evil one with the laser... sweeeeetneeeesss. Can't wait to see the step-by-steps.

Ensui Malik: Are you referring to my J5 or the competition coming up? If J5, I have a tutorial here of the construction so far to date. The fun competition for another type of bot starts on Oct 1st, so I am flicking between both at present. Good to see you back.

RobotMaker: I don't think 4mem8 is planning on running the J5 on any carpet, and if he does, I'm sure he will correct this problem, most likely by putting a thin layer of plastic sheet over the screws or replacing them with flat-head or round-head screws. Remember, this is just a work of art that may need to be modded down more. Thanks for your comments on T-!, robotmaker.
Wow, it's been a long time since I worked on my J5. I had some issues at home and was out of a job for 3.5 months, so I'm only just starting to get back into it, and it will take a bit of time to see where I last left off. So don't expect any pics soon, OK, but I will get there.

You back in the saddle work-wise? Good to see you back!

Good to have you back, brother. Hopefully we can chat soon. Need to also catch up on what you've been doing.
http://forums.trossenrobotics.com/archive/index.php/t-1802.html?s=ed9938040ca9f9254bd0f1d7fe8efaa6
A huge thank you to Jesse Christensen (and his beautiful wife Emma) for performing at our Evening in the Tropics event at the Conservatory on Saturday, January 19, 2019. Jesse has generously donated his talents on a number of occasions, as illustrated in the attached photos. He has a wonderful, mellow voice, a large repertoire of songs including Rock, Pop, Folk, Blues and some original tunes, and plays guitar, ukulele and harmonica. Guests are always caught up in his music, singing, dancing and even playing along. He is warm and personable and loved by everyone, including the young ones. If you're ever in need of a musician, he has our highest recommendation.

Sunshine & Snowflakes, a Prelude to the Season was held at the Centennial Botanical Conservatory on Saturday, December 1, 2018. It was an enjoyable and colourful event, with music provided by the Thunder Bay Chinese Erhu Musicians, and a visit from the Thunder Bay Jane Austen Society added a special touch of nostalgia. The Friends of the Thunder Bay Conservatory want to express our sincere gratitude to the Chinese Erhu Musicians and to Jesse Christensen and friend for sharing their musical talents. Thanks to Linda Beadow for baking dozens of delicious cookies that the children had such fun decorating. And to the many volunteers who offered their time, talent and baking skills to make the afternoon such a success, we express our deepest gratitude. Many thanks to the Conservatory staff for their dedicated care of the Conservatory's botanical collection, and to the City of Thunder Bay for providing us with this tropical treasure.

About three hundred visitors of all ages attended "Ghosts & Goblins in the Garden" at the Conservatory on Saturday, October 27, 2018. Adults and children alike were dressed in spook-tacular fashion.
A big thank you to our wonderful volunteers who baked delicious treats, served refreshments, greeted guests at the front door, assisted with the scavenger hunt and helped with set-up and clean-up. A special thanks to Jesse Christensen and friend for their musical contribution. Thank you also to the Conservatory staff for all their hard work in maintaining the Conservatory and its valuable botanical collection. September 13, 2018, more than 30 children and adults gathered to harvest veggies from the Children's Garden at the Conservatory. The garden was planted by the children in May and included onions, potatoes, carrots and marigolds. The harvest, as well as the spring planting, are much loved annual events at the Conservatory and began in 2016. Thanks to all the enthusiastic children and adults who participated. A huge thank you to our wonderful volunteers who helped water and care for the gardens over the summer, and to those who helped with the harvest. ​The Doors Open movement began in France in 1984 and spread throughout Europe then North America, Australia and elsewhere. It became a national program developed by Heritage Canada, and Ontario was the first province on board in 2002, with Newfoundland and Labrador, Manitoba, Saskatchewan, Alberta and British Columbia following suit. It aims to raise awareness about a community’s built and cultural heritage. On Saturday, September 8th, the Centennial Botanical Conservatory was one of 17 sites participating in Doors Open Thunder Bay 2018, and the Friends of the Thunder Bay Conservatory hosted the event. Historical photographs were on display throughout the facility that showed the beginnings of this gem in our midst. Volunteers, Donna McKay, Joy Pangrac, Ming Wu and Susan Prince, and Master Gardeners, Lynda Bobinski, Holly Rupert, and Carole McCollum, were on hand to provide information and answer any questions. 
The 3rd annual Children's Garden Planting Afternoon was held Saturday, May 26, 2018 in the Conservatory Community Garden. The children planted potatoes, carrots, onions and marigolds. Big thanks to our volunteers and to all the children and adults who participated.

Thursday afternoon, April 19, 2018, the Friends hosted "Afternoon Tea With Friends" at the Conservatory. We were honoured with the presence of flautist Marinda Tran, who performed exquisite music to the delight of everyone present. We extend our deepest appreciation to Marinda. Many thanks to our volunteers who helped with setup, cleanup, refreshments and so much more: Debbie Hatzis, Janice Horgos, Pearl Lunn, Donna McKay, Sean Murphy, Allen Nunn and Joy Pangrac. Thank you also to everyone who provided delicious baking for the event: Catherine Atkinson, Emese Boyko, Betty Heath, Judy Nesbit, Allen Nunn and Joy Pangrac. We are grateful for the hard work and dedication of Conservatory staff in caring for our beautiful and much-loved botanical garden.

The Spring into Life event held March 18, 2018 at the Conservatory was a happy and successful gathering. Conservatory beekeepers Rudy and Lois Kuchta were in attendance with a fascinating bee and honey display. Ren d'Esterre developed and presented an interactive children's program on food plants growing in the Conservatory, assisted by Paige Ready. Ren's engaging presentation included fruit sampling and was a high point of the day. The wonderful musical talents of Andrew Coates, Eleanore Wieser and Emma Rudahigan were appreciated by everyone.

Many thanks to our faithful crew of volunteers who baked, greeted guests, manned the refreshment and registration tables and helped with setup and take-down: Catherine Atkinson, Susan Barnes, Allan Hall, Evelyn Kinsman, Kathryn Loftus, Pearl Lunn, Judy Nisbet, Allen Nunn, Janet O'Connor, Joy Pangrac, Leslie Schelling and Carol Turgeon.
We once again commend the Conservatory staff for their dedication to and care for our city's beautiful Botanical Conservatory. The enthusiastic support of the citizens of Thunder Bay who attend our events and regularly visit the Conservatory continues to encourage us in our efforts. Watch for Afternoon Tea with Friends on Thursday, April 19.

On Sunday, January 21, 2018, the Friends of the Thunder Bay Conservatory once again hosted An Afternoon in the Tropics, much to the delight of all who attended. Music was provided by local artist Stephanie Skavinski, an international plant search provided entertainment and challenge to children and adults alike, and tropical punch, coffee and delicious baking were served to the appreciative crowd. Thanks so much to our bakers and everyone who helped with set-up, clean-up, the refreshment table, the children's activity and the front door. A very special thank you to Stephanie for entertaining us with her beautiful music. And we cannot forget to acknowledge the wonderful Conservatory staff for the care they devote daily to maintaining the beautiful Conservatory collection and facility.

On Saturday, November 18, the Friends of the Thunder Bay Conservatory hosted a celebration of the 50th anniversary of the Centennial Botanical Conservatory. Over 400 guests of all ages were in attendance. Friends Co-Chair Sharon Sidlar introduced special guest speakers: MP Don Rusnak; Councillor Frank Pullia, representing the Mayor's Office and the City of Thunder Bay; and Mike Dixon, Supervisor of Forestry and Parks. Friends Historian Monika McNabb introduced Rob McCormack, former Secretary Manager of the Fort William Board of Parks Management, who spoke of the development and early history of the Conservatory. A beautiful anniversary cake was cut by Rob McCormack and Sharon Sidlar.

Cake, cupcakes, coffee and punch were served to a very appreciative crowd. Talented musicians Glenn Jennings and Jesse Christensen entertained visitors.
Children coloured birthday cards and hung them proudly in the entrance to the Conservatory. Conservatory staff had decorated the Conservatory with anniversary balloons among the beautiful plants and blossoms. They presented guests with a piece of Conservatory history in the form of a little potted ivy plant from the original ivy growing in the Seasonal Room since the opening in 1967. We extend sincere appreciation to our special guest speakers. Thanks so much to Glenn Jennings and Jesse Christensen for sharing their wonderful music with us. Thanks to all our dedicated volunteers who assisted throughout the day. Very special thanks go to the hard-working and dedicated Conservatory staff who daily care for this botanical treasure. The Friends of the Thunder Bay Conservatory would also like to express our sincere gratitude to the members of the Lakehead Antique Car Club for their generous donation presented at the 50th Anniversary Celebration.

A free Children's Garden Planting Afternoon was held Saturday, June 17, 2017. The children planted climbing beans, peas, potatoes, carrots, onions and more. Thanks to all the children and adults who participated. Special thanks to Karen Nadeau and the staff at the Conservatory for seeding and growing bedding plants for the Children's Garden.

Thursday afternoon, June 1, 2017, the Friends of the Conservatory hosted a Tulip Blooming Celebration at the Dutch-Canadian Friendship Tulip Garden on the grounds of the Centennial Botanical Conservatory. Marilyn Stinson gave introductory remarks on behalf of the Friends, followed by addresses from Jeanetty Jumah of the Dutch Canadian Society and Mike Dixon, Supervisor of the Conservatory. The Macgillivray Pipe Band opened and closed the event. Special thanks to Karen Nadeau and the Conservatory staff for designing and preparing the maple leaf themed tulip bed, and to our dedicated group of volunteers who baked and served at the event. Thank you to all who assisted and participated.
We are grateful to the Canadian Garden Council for choosing our Conservatory as a recipient of a Friendship Tulip Garden, Vesey’s Bulbs for their special gift of 700 red and white tulip bulbs, and to all those organizations that helped make the program possible. Our garden is one of 140 Friendship Tulip Gardens planted across Canada in 2015 to celebrate the gift of 100,000 tulip bulbs sent to Canadians by the Dutch royal family in 1945. The gift of tulips is a symbol of appreciation for Canada's hospitality to the members of the Dutch royal family during the Second World War and the major role that Canadian troops played in the liberation of the Netherlands. ​Come dig, plant and learn! Sunday, May 28, 2017, the Master Gardeners Thunder Bay, in partnership with the Friends of the Conservatory, presented the first of two free educational garden sessions in the Conservatory Community Garden, titled Come Dig, Plant & Learn. The first session covered composting basics, veggie garden planning, and straw bale and raised bed garden preparation. On Sunday, June 11, the Master Gardeners demonstrated veggie garden planting. We are so grateful for all the information shared by Holly and Hazel of the Master Gardeners. Thank you to all who participated. Special thanks to Karen Nadeau and the Conservatory staff for seeding and growing the wonderful veggies for the Community Garden. On Sunday March 19, 2017, the Friends of the Thunder Bay Conservatory welcomed over 300 visitors of all ages to our third annual Spring Into Life celebration at the Conservatory. Close to 100 children were in attendance and they happily searched for ladybugs among the tropical foliage, planted sweet pea seeds to take home, filled themselves with goodies from the refreshment table and sang, danced and played along with the ever-popular and personable Jesse Christensen. 
Thanks so much to our dedicated volunteers who assisted with the afternoon: Evelyn Kinsman, Theodora and Debbie Hatzis, Lesli Schilling, and our friendly greeter, Allan Hall. It wouldn't be the same without the wonderful baking generously provided by Catherine Atkinson, Allen Nunn, Judy Nisbet, Margie Atkinson Parker, Lesli Schelling, Susan Barnes, Linda Beadow and Kathryn Loftus, greatly enjoyed by all, especially the little ones. Thanks again to Jesse Christensen and friend for entertaining young and old with their songs and different instruments. And, as always, our sincere gratitude to the Conservatory staff for all the work they do to maintain and present the beautiful botanical collection. Details of upcoming events will be posted on this website, the Friends' Facebook page and Twitter account.

On Sunday afternoon, February 12, 2017, the Friends celebrated Valentine's Day by hosting Love is in Bloom at the Conservatory. Over 200 people of all ages enjoyed the warmth, the fragrant air, homemade baking and live music amid red powder puffs, amaryllis, and lemon, orange, banana, papaya and mango trees. Thank you to all who came out on this blustery winter's day. We are very grateful to Jesse Christensen and his musical companion for sharing their talents and encouraging the children to play instruments along with them. Jesse had young and old alike singing and dancing in the aisles! Many thanks to our volunteer bakers, Kelly Legros, Jennifer Wyma, Susan Barnes, Catherine Atkinson, Evelyn Kinsman, Joy Pangrac, Kathryn Loftus and Allen Nunn, who continue to support our events with their superb desserts. Special thanks to Linda Beadow, who baked 108 heart-shaped cookies for the children to decorate. We are also grateful to our other great volunteers who help ease the workload in staging these events: Joy Pangrac, Janet O'Connor, Evelyn Kinsman, Theodora and Debbie Hatzis, Lesli Schilling, and our friendly greeter, Allan Hall.
As always we truly appreciate the city’s Conservatory staff for their support and for the love and care they give daily to this amazing botanical collection. Thunder Bay's Centennial Botanical Conservatory was rocking to the music of DJ/producers Aticka, Jader Ag, and Matt Migz on Saturday afternoon, January 21, 2017. Local performer, producer and DJ, Matt Migliazza, partnered with the Friends of the Thunder Bay Conservatory to present the event, simply titled The Green House. The afternoon was a great success with more than 300 people attending, many of them young people enthusiastically dancing in the aisles. We are so grateful to the performers and especially to Matt Migliazza for arranging the event. Thank you to our wonderful volunteer bakers, Allen Nunn, Catherine Atkinson, Emesha Boyko, Evelyn Kinsman, Jennifer Wyma, Joy Pangrac and Susan Barnes for providing the sweet treats. They were a big hit. Our tropical punch was provided by Allen Nunn. Thanks to Allan Hall, Catherine Atkinson, and Joy Pangrac for all their help. We also want to express our sincere gratitude to Tyler at the Conservatory for the valuable assistance he has given to the Friends over the past months. He is going to be missed. The executive of the Friends of the Thunder Bay Conservatory has received reports that a local radio station announced that "The Green House" event, held in partnership with Matt Migz, was limited to those 19 years of age and over and that ID was required. The Centennial Botanical Conservatory is a city owned public space and all of our events are open to everyone. "The Green House" had over 300 people of all ages in attendance, from infants to seniors. We deeply regret that misinformation was aired and hope that it didn't keep anyone from attending. The second annual Starlight & Snowflakes, a Prelude to the Season was held at the Centennial Botanical Conservatory on Saturday, December 10, 2016. 
The tropical warmth and flora, the seasonal displays, twinkling lights, hot apple cider and delicious baking delighted young and old. Children had fun decorating snowflake and snowman ornaments and posing for photographs in Santa's sleigh. Music by Jesse Christensen and friend had us singing and dancing along. The door prize was won by Nathan P. The Friends of the Thunder Bay Conservatory want to express our sincere gratitude to Jesse Christensen for providing the evening's musical entertainment. A huge thank you to our volunteers, Joy Pangrac, Carol Turgeon, Kathryn Loftus, Evelyn Kinsman, Allen Nunn and our new executive member, Cassandra Eckman. Thanks so much to the folks who donated baking: Jennifer Wyma, Kathryn Loftus, Sandra and Allen Nunn, Evelyn Kinsman, Catherine Atkinson, Margie Parker, Joy Pangrac, as well as the young woman and her family who brought goodies along to the event! Many thanks to the Conservatory staff for their beautiful seasonal displays and for their dedication and care of the Conservatory and its collection. And last, but not least, thank you to the City of Thunder Bay for providing this Centennial legacy, a welcoming refuge of tropical warmth and beauty for all.

November 18, 2016, was the 49th Anniversary of the opening of the Centennial Botanical Conservatory in Thunder Bay. Due to warnings of imminent severe winter weather and out of concern for public safety, the Friends of the Thunder Bay Conservatory cancelled planned celebrations. Sincerest gratitude goes out to the Conservatory staff and to our wonderful volunteer bakers for all their hard work in preparation. We have been informed by credible sources that while the blizzard raged outside, the fairies and gnomes partied like it was 1967. Happy 49th birthday Centennial Botanical Conservatory! Photos courtesy of Monika McNabb, Kathleen Ott, Katherine Caroline and Sandy Nunn.
On Wednesday evening, August 31, 2016, children and parents came together to harvest vegetables from the children's gardens at the Conservatory. The gardens were planted by children in June and the abundant harvest included onions, potatoes, squash and tomatoes. The children chose a selection of fresh vegetables for their families and the remainder was left at the Conservatory for visitors to take home. The community and straw bale garden, located to the west of the Conservatory, will continue to be watered and maintained until all the vegetables are harvested. We encourage you to check out these gardens while visiting the Conservatory and if something is ripe, please help yourself! Thank you to all our volunteers who helped to build, plant, water and care for the gardens over the summer and to the Master Gardeners for their assistance.

Members of the executive spent a very warm afternoon on National Garden Day, June 17, 2016, planting in the Community Garden at the Conservatory.

On a warm and sunny Sunday, June 12, 2016, the Conservatory grounds were bustling as the planting of the community garden got underway. Thank you to everyone who came out to help and learn about different gardening techniques including straw bale, raised bed, rain gutter and container gardening. Special thanks to Holly, Hazel, Kim, Susan and Ralph from the Thunder Bay Master Gardeners. We truly appreciate all their expertise and support! Thank you also to our other volunteers on Sunday: Kathy, Joy, Theodora, Debbie, and to Jennifer, whose cookies were yummy. Special thanks to our very talented construction crew, Rohan, Allen and Werner as well as to Emesha, Sean, Kathryn and John for their assistance with bale placement and conditioning. A community garden project of this scale could not have happened without the tremendous support and assistance from the Forestry Division and Conservatory staff, so deepest gratitude to them as well.
The children had so much fun and did a marvellous job planting flowers, potatoes, and onions. We hope they will return regularly to check on the progress of the five raised beds they planted. We encourage everyone to take a stroll around the Conservatory grounds this summer, have a picnic, relax on a park bench and feast your eyes on the pollinator garden, the bee hives and our interesting and beautiful community garden. We will post on the Friends Facebook page when veggies are available to be shared and enjoyed!

On Friday, May 6, 2016, Friends of the Conservatory volunteers joined Forestry and Conservatory staff members for EcoSuperior's annual Spring Up to Clean Up campaign, working to tidy up the Conservatory grounds as well as the road and entranceway to the Chapples Golf Course trails. The event cleared up a winter's worth of litter on a sunny afternoon and strengthened bonds between staff members and the Friends group. Thanks to Kathleen Ott, Chair of the Friends of the Conservatory, for the delicious home-baked cinnamon rolls.

On Thursday, April 21, 2016, the Friends hosted an afternoon tea at the Conservatory that was enjoyed by approximately 300 people. Many of the visitors were residents of local assisted living facilities and people with disabilities, for whom this was a welcome tropical getaway on a cool spring day. Many thanks to the Conservatory staff for their help in dressing the Conservatory up for the event and to our ground crew of volunteers – Allan Hall, Allen Nunn, Sean Murphy, Evelyn Kinsman, and Catherine Atkinson. Compliments abounded on the delicious baking that was provided with the refreshments and we greatly appreciate this significant contribution to help make our events a success. Thank you to all the bakers. As always, we commend the Conservatory staff for their dedication to, and care of, our beautiful botanical garden. The gardening basket door prize was won by Doreen L-S.
For information on past and future Friends’ events and activities, please check out our website or watch for postings and updates on Facebook, Twitter and Instagram.

Spring into Life on Sunday, March 20, 2016, was another successful event hosted by the Friends. Thanks to the good work and dedication of Conservatory staff, tulips, hyacinths, lilies and hydrangeas added beauty, colour and fragrance to the well-kept tropical plants, some of which were also blooming. Children had fun planting nasturtium seeds to take home and of course enjoyed all the wonderful treats at the refreshment table. Our greeter, Allan Hall, with the Horticultural Society, gave out sweet pea plants, which hopefully survived the cold journey to their new homes. Many thanks to Cassandra Eckman and Debbie Hatzis for helping with the children’s activity, to our talented bakers, Jennifer Wyma, Kathryn Loftus, Mike and Peggy Scott, Nancy Serediak, and Catherine Atkinson for providing sweet delights and to Susan McMillan, Theodora Hatzis and Kathryn Loftus for helping with the refreshment table. Special thanks to Jesse and Emma Christensen for sharing their musical talent and encouraging children to play along with them. The attendance prize, tickets to Exploding Gardening Myths: Separating Fact From Fiction, donated by the Thunder Bay Master Gardeners, was won by Nicole M. We continue to be grateful for the amazing and enthusiastic support of the citizens of Thunder Bay who now visit the Conservatory often and attend our events. Watch for Afternoon Tea with Friends on Thursday, April 21st and other exciting activities being planned for spring and summer!

Allan Hall handing out sweet pea seedlings. Freya and her newly planted nasturtium.

On Sunday afternoon, February 28, 2016, the Friends of the Conservatory participated in Science North's "Science in the City" at the Bora Laskin Building at Lakehead University.
We had an instructional display from last summer's very successful straw bale demonstration garden and we also helped our young visitors plant sweet pea seeds. We promised that we would provide more detailed guidelines for growing them and transplanting them in the spring. The following website has very good instructions: Horticulture Magazine - Starting Sweet Peas from Seed. *Note that since the sweet pea seeds were planted in paper cups, they can be planted outdoors without removing the cups, so as not to disturb the roots.

Four hundred visitors braved the bitter cold on February 13, 2016, to attend Valentine's Eve at the Conservatory. What amazing support! A big thank you to Matt Flank for sharing his musical talents and to his brother Dave who took over when Matt needed a break! It was wonderful to see and hear the children singing along! We also want to thank the following for their baking contributions: Jennifer Wyma, Evelyn Kinsman, Megan Clark & Stirling McIntosh, Lorna Olson, Nicole Croes, Peggy Scott, Susan Barnes, Nancy Serediuk, Jeanette & David Lightwood, and Kathryn Loftus. Deepest gratitude to our wonderful volunteers: Susan, Marlene, Debbie, Theodora, Evelyn, Sean, Allen and Werner. Thank you also to the Conservatory staff for decorating and presenting the tropical house in such a fine manner and especially to Tyler for all his help that evening! The Attendance Draw was won by Theresa H., visiting from North Bay. Watch for information about our upcoming event “Spring into Life” to be held Sunday, March 20th.

On Sunday, January 17, 2016, the Friends of the Thunder Bay Conservatory hosted their first public event of 2016, the second annual celebration of An Afternoon in the Tropics. Despite the frigid temperatures Thunder Bay responded with great enthusiasm and more than 400 people of all ages attended. Over 100 children participated in a scavenger hunt identifying Conservatory flowers and fruit.
The attendance prize, a potted ponytail palm, was won by Stirling M. Thank you to all the hard working volunteers who helped make this event such an outstanding success, including Sharon Hyder and Janet O'Connor at the refreshment table and Evelyn Kinsman at the registration table. Thanks also to Jennifer Wyma, Evelyn Kinsman, Sandy and Allen Nunn, Sharon Sidlar, Kathryn Loftus and Monika McNabb, for their donations of delicious baked treats. As always, the Conservatory staff must be acknowledged with gratitude for the daily care they devote to maintaining the beautiful Conservatory collection and facility.

Starlight & Snowflakes, a Prelude to the Season

Starlight & Snowflakes, a Prelude to the Season was held at the Centennial Botanical Conservatory on Saturday, November 28, 2015, with close to 200 people of all ages coming out to enjoy this beautiful facility. The aroma of hot apple cider wafted through multicoloured blossoms and twinkling lights, mingling with the sounds of live music by Lucanus Pell. Seventy-three children had fun decorating delicious homemade star cookies and posing for photographs in Santa's sleigh. Our beautiful door prize was won by Marie M. The Friends of the Thunder Bay Conservatory want to express our sincere gratitude to Lucanus Pell for providing the evening's musical entertainment. Thanks to Linda Beadow for baking dozens of star cookies for the children to decorate and to the many volunteers who offered their time and talent to make the evening such a success. Many thanks to the Conservatory staff for the gorgeous seasonal displays and for their dedication to the care of the Conservatory and its collection, and to the City of Thunder Bay for providing us with this treasure. Our Conservatory is the only public greenspace in all of Northern Ontario where families can spend a chilly winter evening relaxing in a warm, tropical environment. Photos courtesy of Sean Murphy, Kathleen Ott & Allen Nunn.
The Friends of the Thunder Bay Conservatory were honoured to host the Dutch-Canadian Friendship Tulip Garden Planting Ceremony at the Centennial Botanical Conservatory on Thursday afternoon, October 15, 2015. The Macgillivray Pipe Band opened the event with the Port Arthur Branch #5 Royal Canadian Legion providing the Colour Party. Kathleen Ott, Chair of the Friends of the Thunder Bay Conservatory, gave introductory remarks followed by addresses from Mayor Keith Hobbs; Jeanetty Jumah with the Dutch Canadian Society of Thunder Bay; Mike Dixon, Supervisor of the Conservatory; and Captain George Romick with the Lake Superior Scottish Regiment. They spoke about the rich history that Canada and the Thunder Bay area share with the people of Holland; about the young soldiers of the Lake Superior Regiment (Motor) who served in Holland during the war, some making the ultimate sacrifice; and about the many war brides and the new immigrants who followed our soldiers home.

Mayor Hobbs, Jeanetty Jumah, Mike Dixon and Karen Nadeau (Lead Hand at the Conservatory), Sandra Nunn (Friends of the Thunder Bay Conservatory), Captain Romick and Kim Treichler (Port Arthur Branch #5 Royal Canadian Legion) each planted a tulip bulb in memory of the first gift of 100,000 tulips from the Dutch Royal Family. Kathleen Ott and the Macgillivray Pipe Band closed the event and guests were invited to plant a tulip and visit the Conservatory for refreshments.

We are grateful to the Canadian Garden Council for organizing the program and choosing our Conservatory as a recipient, Vesey’s Bulbs for their special gift of 700 red and white tulip bulbs, and to all those organizations that helped make the Friendship Garden program possible. Our sincere thanks to Mayor Hobbs, Captain Romick, Jeanetty Jumah, Mike Dixon, the Macgillivray Pipe Band and the Port Arthur Branch #5 Royal Canadian Legion Colour Party for assisting with the program.
Special thanks to Karen Nadeau and the Conservatory staff for designing and preparing the maple leaf-themed tulip bed, the Holland Bakery for providing the delicious dainties, and our dedicated group of volunteers. The Friends of the Thunder Bay Conservatory will be hosting a Blooming Celebration in the spring of 2016 and invite everyone to join us again to see the garden in full splendour.

The Friends of the Thunder Bay Conservatory, in collaboration with the City of Thunder Bay, participated in the second annual Exploring our Routes on September 20th, 2015. The event was designed to promote active living, outdoor recreation, local food and family fun by exploring the walking and biking paths to and from the International Friendship Gardens and the Centennial Botanical Conservatory. The Conservatory featured indoor and outdoor activities including a honey bee display with Conservatory beekeepers Rudy and Lois Kuchta and the Thunder Bay Beekeepers' Association; straw bale and general gardening advice with Thunder Bay Master Gardeners; live music by Steph Skavinski; an international plants scavenger hunt for the children; and free refreshments. We want to express our sincere gratitude to all those who volunteered and participated in making the day such a success. Photos courtesy of Monika McNabb.

On Sunday, June 14th, 2015, the Friends of the Thunder Bay Conservatory, in collaboration with the Conservatory Staff and the Master Gardeners, celebrated GardenOntario Week by hosting a Community Straw Bale Garden demonstration on the Conservatory grounds. Kathleen Ott and Holly Rupert researched the process and "conditioned" the bales in advance of planting. Kathleen provided handouts and spoke to attendees about the "conditioning" process, the planting method and the benefits of growing in this novel raised bed.
With the help of participants, eighteen different varieties of seedlings, two types of bean seeds and white onion bulbs, along with pansies and marigolds, were planted in the bales. Not to be left out, the children in attendance were provided the opportunity to plant a flower to bring home. Our sincere thanks to Conservatory Staff, as well as Holly Rupert and Ralph Bullough, for their help and expertise. Special thanks to Sasi Spring Water for donating water. Our gratitude, as always, extends to our wonderful volunteers. Please visit the Conservatory grounds throughout the summer to watch the progress of the Community Straw Bale Garden situated just west of the main building. For more pictures and information check out our Friends Facebook page. A more detailed documentation of the process will also be added to the "What's Happening" page of our website. Please watch our Facebook page and the website for future events.

On the evening of May 13, 2015, a group of volunteers from the Horticultural Society and the Friends of the Conservatory teamed up to weed and tidy the perennial bed on the west side of the Conservatory grounds. This is the second year we have jointly undertaken the task. Teamwork at its finest! Volunteers gathered again on the mornings of May 19 and May 26 to continue the weeding. A path was cleared through the flower bed to the beehives and the Conservatory will install mulch to cover the path and the bee yard. Clean-ups continued through the summer. Photo courtesy of Kathleen Ott.

On Sunday, May 10th, 2015, a very well-attended Mother's Day Celebration was held at the Conservatory with more than 320 guests of all ages welcomed over the 3-hour period. This was the 5th Friends-sponsored event and the attendance was the highest we have had yet. Free refreshments were served to the appreciative crowd and 70 children took part in planting a nasturtium to take home to mom. The door prize was won by Rosa Carina.
Thank you to our wonderful volunteers for making this event possible and to the Conservatory staff for their hard work and dedication. A very special thank you, as well, to the City of Thunder Bay for providing this wonderful heritage venue.

On Friday, May 1st, 2015, Friends of the Conservatory volunteers were paired with Forestry and Conservatory staff members to clean up the Conservatory grounds and Dease Street through to the Chapples Golf Course trail as part of EcoSuperior's Spring Up to Clean Up campaign. We were delighted that the event served to spruce up the immediate area and helped develop ties between our volunteers and the hard working staff members.

On Friday, April 10th, 2015, the Friends of the Thunder Bay Conservatory hosted a novel two-hour program promoting the health and well-being of the community: Soothe Your Body and Soul at the Conservatory. On a pre-registered, no-charge basis, ten-minute seated massage sessions were provided by Geoff Medwid RMT and Michelle Reinelt RMT, guided relaxation sessions were led by yoga instructor Melissa Tempelman and joint mobility routines were conducted by Kathryn Loftus RMT. The natural therapeutic environment provided by the Conservatory enhanced the experience for all involved. We want to express our sincere gratitude to Geoff Medwid, Michelle Reinelt, Kathryn Loftus and Melissa Tempelman for volunteering their professional services to the community. Thank you also to those who assisted with the evening and to Conservatory staff and management for their excellent stewardship of Thunder Bay's Centennial Botanical Conservatory. Watch our web page or the Friends of the Thunder Bay Conservatory Facebook page for details on upcoming events.

The Friends of the Thunder Bay Conservatory welcomed 250 visitors, young and old alike, to a celebration of spring at the Conservatory on the afternoon of Sunday, March 29th, 2015.
In spite of a very snowy start to the day, the Conservatory was filled with spring blossoms and happy people. Our sincere gratitude to the Conservatory staff for all the work they do to maintain and showcase the collection. Thanks so much to our volunteers who assisted with the afternoon and to the bakers who filled our refreshment table with sweet delights, including the talented Jenn Riley of Cake! Thunder Bay. The children's scavenger hunt was very popular, with each young participant taking home Easter treats for their efforts. We appreciate the discount provided by the Frederica Street Bulk Zone. The attendance draw, a gift certificate generously donated by Bill Martin's Nurseryland, was won by Jennifer MacDonald. Watch our web page or the Friends of the Thunder Bay Conservatory Facebook page for details on upcoming events. Photo courtesy of Linda Ryma Photography.

On Saturday, February 14th, 2015, the Friends hosted a very successful Valentine’s Evening at the Conservatory. Over 130 guests escaped the frigid night air to spend two hours in Thunder Bay's own tropical paradise, a magical experience. Thank you to the dedicated Conservatory staff for creating and maintaining that magic. Our sincere gratitude extends to the wonderful volunteers who assisted with the evening. Special thanks also to Jake Vaillant and Noles Dennhardt for sharing their musical talents! And praise and gratitude to Glen, of European Bakery; Ashley, of A&A Baking; Donnalee Morettin and Allen Nunn for the baked treats. The winner of the beautiful gift basket from Rollason Flowers was Heather Lozinski. John Jordan and Grant Merkley both won hanging baskets from the Conservatory. Thank you to Rollason Flowers and the Conservatory staff for their generous donations of prizes. The children's scavenger hunt book draw was won by Jordan, age 9, and Kaia, age 6.
Please watch for more upcoming events and don't forget the Friends of the Thunder Bay Conservatory General Meeting, March 3, 7–9 pm, at the Mary J.L. Black Public Library.

On Sunday, January 18th, 2015, the Friends hosted our very first public event at the Conservatory, An Afternoon in the Tropics. This event was an outstanding success with 300 people of all ages in attendance. The Conservatory and its plants looked magnificent. We commend the dedication and care provided by Conservatory staff members and thank them for all their help in preparing for and hosting the event. Thank you also to the talented Kyle Shushack for his musical performance. The adult door prize, which included a gift certificate for Waking Giant Coffee, compliments of Jay Stapleton, was won by Andrea K. The children's scavenger hunt book prizes were won by Maya (9) and Michael (2). Photos courtesy of Allen Nunn. Thank you to all who volunteered their time in making this inaugural event a success. Please check our website for future Friends events and activities.
Caveolae, flask-shaped invaginations of the plasma membrane, are particularly abundant in muscle cells. We have recently cloned a muscle-specific caveolin, termed caveolin-3, which is expressed in differentiated muscle cells. Specific antibodies to caveolin-3 were generated and used to characterize the distribution of caveolin-3 in adult and differentiating muscle. In fully differentiated skeletal muscle, caveolin-3 was shown to be associated exclusively with sarcolemmal caveolae. Localization of caveolin-3 during differentiation of primary cultured muscle cells and development of mouse skeletal muscle in vivo suggested that caveolin-3 is transiently associated with an internal membrane system. These elements were identified as developing transverse-(T)-tubules by double-labeling with antibodies to the α1 subunit of the dihydropyridine receptor in C2C12 cells. Ultrastructural analysis of the caveolin-3–labeled elements showed an association of caveolin-3 with elaborate networks of interconnected caveolae, which penetrated the depths of the muscle fibers. These elements, which formed regular reticular structures, were shown to be surface-connected by labeling with cholera toxin conjugates. The results suggest that caveolin-3 transiently associates with T-tubules during development and may be involved in the early development of the T-tubule system in muscle.

The plasma membrane of mammalian cells is divided into a number of different structural and functional microdomains. Much recent interest has been focused on one such domain, the caveola, a surface invagination with unique morphology which is readily identifiable by electron microscopy (Parton, 1996). Caveolae are extremely abundant in endothelial cells, adipocytes, and smooth muscle cells. In endothelia, caveolae appear to play a major role in transport across the endothelial monolayer (Ghitescu et al., 1986; Schnitzer et al., 1994).
Other work has suggested a role for caveolae in signal transduction (Lisanti et al., 1994), in specialized endocytic uptake pathways (Anderson, 1993), and in calcium homeostasis (Fujimoto, 1993; Fujimoto et al., 1992). The diversity of the proposed functions of caveolae raises the question of whether they have a single function, as does a clathrin-coated pit, or whether they are structural units used for many different purposes. Caveolae are enriched in cholesterol (Montesano et al., 1982; Rothberg et al., 1990) and in glycosphingolipids (Parton, 1994), and increasing evidence suggests that caveolae are built up around sphingolipid–cholesterol rafts (Simons and Ikonen, 1996). The plasma membrane of all mammalian cells appears to contain such rafts which, upon detergent treatment, can be isolated as insoluble glycosphingolipid-enriched complexes (DIGs; Parton and Simons, 1995). DIGs and caveolae share many features, but caveolae appear to be more restricted in distribution, being undetectable in some cell types (Fra et al., 1994; Gorodinsky and Harris, 1995). Caveolin-1, the major protein of caveolae in mammalian cells (Kurzchalia et al., 1994; Parton, 1996), is a 21-kD integral membrane protein which has been shown to bind cholesterol and to interact with glycosphingolipids (Fra et al., 1995a; Murata et al., 1995). Heterologous expression of caveolin-1 in cells lacking caveolae causes formation of surface invaginations with many of the features of caveolae (Fra et al., 1995b). The de novo produced caveolae were the same size and shape as endogenous caveolae, and cross-linked glycosylphosphatidylinositol-anchored proteins were shown to be concentrated within these invaginations. These results suggest that caveolin-1 has the capacity to interact with DIGs and create the characteristic caveolar invagination and to generate a microdomain with a distinct lipid composition (Parton and Simons, 1995).
Recent work has identified two additional caveolin family members which share many structural features with caveolin-1 (Way and Parton, 1995; Scherer et al., 1996; Tang et al., 1996). The role of caveolin-1 and other caveolins is not restricted to generating and maintaining caveolar structure. In vitro studies have shown specific functional interactions with trimeric G protein α subunits (Li et al., 1995). These interactions may hold the G protein in an inactive state on the cytoplasmic face of the membrane. We have recently cloned and characterized a novel muscle-specific homologue of caveolin-1, termed M-caveolin or caveolin-3 (Way and Parton, 1995). Expression of caveolin-3 is induced upon muscle differentiation, and mRNA is undetectable in undifferentiated C2C12 cells (Way and Parton, 1995; Tang et al., 1996). Caveolin-3 is ∼60% homologous to caveolin-1 with the major differences occurring in the NH2-terminal portion of the protein. The identification of a muscle-specific caveolin protein raises the question of the specific role of this protein, and of caveolae generally, in muscle. Caveolae of muscle cells (also termed subsarcolemmal vesicles or microvesicles) have been studied using a host of morphological techniques (Gabella, 1978). Several different hypotheses were proposed for the role of caveolae in muscle function and development. The three main theories proposed roles for caveolae in (a) provision of extra membrane during muscle extension (Prescott and Brightman, 1976), (b) calcium entry (Popescu, 1974), and (c) in the formation of the transverse (T)-tubule system during muscle development (Ishikawa, 1968). The first “stretch” theory has largely been discounted by subsequent studies suggesting that caveolae do not contribute membrane during muscle extension in vivo (Dulhunty and Franzini-Armstrong, 1975; Gabella and Blundell, 1978). 
However, the second theory, proposing a role for muscle caveolae in calcium intake or homeostasis, remains relevant, as recent studies have shown a striking concentration of two putative calcium-regulating molecules within caveolae of several different cell types, including muscle cells (Fujimoto, 1993; Fujimoto et al., 1992). The third theory, invoking a role for caveolae in formation of the T-tubule system, was suggested from early studies of the development of the T-tubule system in cultured myotubes (Ezerman and Ishikawa, 1967; Ishikawa, 1968). The T-tubules are an extensive surface-connected system of membranes which develop and maintain a protein and lipid composition distinct from the sarcolemma (for review see Flucher, 1992). Elegant morphological studies suggested that T-tubules form from the repeated budding of caveolae. Initially the caveolae formed short, beaded, tubular structures, and then these developed into extensive, three-dimensional networks which appeared to be formed from interconnected arrays of caveolae (Ezerman and Ishikawa, 1967; Ishikawa, 1968). Later studies of mouse muscle development in vivo showed a similar association of caveolae-like structures with forming T-tubules, at an early developmental stage (Franzini-Armstrong, 1991). These results were consistent with studies showing that inhibition of T-tubule formation resulted in an accumulation of caveolae-like structures (Schiaffino et al., 1977). This model is still favored by many investigators as some caveolar components are shared with T-tubules (Yuan et al., 1991). However, other studies using membrane-impermeant lipid probes also provided evidence for an internal T-tubule compartment which subsequently fuses with the sarcolemma (Flucher et al., 1991). In the absence of specific markers for muscle caveolae, all of these models have been difficult to test. 
We have examined the localization of caveolin-3 in C2C12 cells, primary mouse muscle cultures, and mouse skeletal and cardiac muscle during development in vivo. We show that caveolin-3 is associated with the T-tubule system of developing muscle. We postulate that caveolae and specifically, caveolin-3, may play a role in the formation of the T-tubule system of muscle. In addition, we speculate that the underlying principles of organization and formation of T-tubules and caveolae may be similar.

Media and reagents for cell culture were purchased from GIBCO BRL (Eggenheim, Germany). C2C12 cells were cultured as described previously (Way and Parton, 1995). Transfection was carried out using Lipofectin (GIBCO BRL) according to the manufacturer's instructions, using caveolin-3-HA in the CB6 vector described by Way and Parton (1995). Cell lines were selected using G418 (GIBCO BRL). Colonies were subcloned and analyzed for caveolin-3 expression by immunofluorescence, as described below. Primary cultures of mouse muscle were prepared from 18-d embryos exactly as described by Chu et al. (1995). Briefly, muscle fibers from the limbs of 18-d post coitus embryos were incubated in trypsin and the cells dissociated by trituration. Debris was removed by filtering the cells through gauze, and the dissociated cells were plated on Matrigel (GIBCO BRL) or on calf skin collagen (Sigma Chemical Co., New South Wales, Australia) according to manufacturer's recommendations.

A peptide corresponding to the 15 NH2-terminal amino acids of mouse caveolin-3, but with the addition of a COOH-terminal cysteine residue and a bridging glycine residue, was synthesized (CGMTEEHTDLEARIIKD). The cysteine residue was used for coupling to activated keyhole limpet hemocyanin before injection into rabbits using the Imject Activated Immunogen Conjugation Kit (Pierce, Rockford, IL).
The conjugated peptides were separated from free peptide using a Presto Desalting column (Pierce) and the concentration of the pooled conjugates determined using the Bio-Rad Laboratories (Richmond, CA) assay with BSA as a standard. Antisera were affinity purified on a column prepared by coupling the peptide through the cysteine residue (Harlow and Lane, 1988). Affinity-purified antibodies were characterized by immunofluorescence and Western blotting; in each case a specific signal was only detected in differentiated C2C12 cells, and this signal was competed by preincubation of the antibody with the specific caveolin-3 peptide. Antibodies to the NH2 terminus of caveolin-1 (VIP21-caveolin) have been characterized previously (Dupree et al., 1993). Mab427 to the α1 subunit of the dihydropyridine receptor (DHPR) was purchased from Chemicon Intl., Inc. (Temecula, CA). Antibodies to cholera toxin were raised in rabbits using fixed cholera toxin binding subunit (Sigma Chemical Co.) as immunogen.

C2C12 cells were grown on glass coverslips coated with laminin (Sigma Chemical Co.) according to manufacturer's instructions. Cells were either fixed in cold methanol or in paraformaldehyde and then permeabilized with 0.1% saponin, as described previously (Parton et al., 1994). After immunolabeling cells were mounted in Mowiol (Hoechst, Frankfurt, Germany) and examined using fluorescence microscopes (Axiovert; Zeiss, Inc.). Confocal microscopy was performed using the EMBL confocal microscope or the confocal microscope (model MRC600 head and laser; Bio-Rad Laboratories; Axioscope, Zeiss, Inc.; used at the Vision, Touch and Hearing Research Centre, University of Queensland, Brisbane, Australia). In each case, optical sections were 0.5 μm in the z-plane.
Mice were killed by cervical dislocation, and small pieces of muscle from the leg or atrium were rapidly excised and fixed by immersion in either 8% paraformaldehyde in 100 mM phosphate buffer, pH 7.35, or the same fixative containing 0.1% glutaraldehyde. Muscle pieces were embedded in gelatin and were then infiltrated with polyvinylpyrrolidone/sucrose overnight and processed for ultrathin, frozen sectioning (Griffiths, 1993). Semithick (0.5–1 μm) and ultrathin sections (50–60 nm) were cut on a Leica Ultracut with FCS attachment. Thick sections were transferred to polylysine-coated coverslips and were then labeled immediately or stored at −20°C until needed, with identical results. Labeling of ultrathin sections for electron microscopy or thick sections for light microscopy was performed as described previously (Lütcke et al., 1994; Parton et al., 1989). C2C12 cells were labeled with cholera toxin binding subunit (10 μg/ml; Sigma Chemical Co.) or with cholera toxin binding subunit/horseradish peroxidase (10 μg/ml; Sigma Chemical Co.) at 4°C as described previously (Parton, 1994), except that all incubations were increased to 2 h to allow time for diffusion into all surface-connected compartments. Labeling of frozen sections and embedding in Epon were performed as described previously (Parton, 1994). Grids were viewed using an electron microscope (model 1010; Jeol, Japan) in the Centre for Microscopy and Microanalysis, University of Queensland (Brisbane, Australia). As a first step to understanding the function of caveolin-3, we examined its localization in muscle tissues. A peptide corresponding to the NH2-terminal portion of caveolin-3, a region of the protein which is not shared with other members of the caveolin family (Way and Parton, 1995), was used to immunize rabbits, and the resulting antiserum was affinity purified on a peptide column.
The affinity-purified antibody (anti-cav3-N) recognized a band of ∼20 kD, which was present in differentiated but not undifferentiated C2C12 cells, and was competed by the specific peptide to which it was raised (results not shown). We investigated the distribution of caveolin-3 and caveolin-1 in fully differentiated skeletal and cardiac muscle cells. Skeletal muscle tissue from adult mice was fixed and processed for semi-thick or ultrathin sections. Sections were labeled with anti-cav3-N, followed by fluorescent second antibodies or protein A–gold. Transverse or longitudinal muscle sections of adult skeletal muscle tissue showed strong peripheral staining of the muscle fibers for caveolin-3 (Fig. 1, A and B). At the ends of muscle fibers, a clearly organized network of labeling was apparent (Fig. 1 A). Immunogold labeling of ultrathin sections showed that caveolin-3 was localized to sarcolemmal caveolae, with low labeling of the intervening membrane or the interior of the muscle fiber (Fig. 2). Caveolin-1 was undetectable in the myofibers by immunofluorescence (Fig. 1, C and D) but showed strong labeling of endothelial cells (Fig. 1 C). Similar results were obtained with cardiac tissue. Double-labeling immunoelectron microscopy of thin, frozen sections from mouse atrium with antibodies to caveolin-1 and caveolin-3 confirmed the lack of colocalization of the two proteins; caveolin-3 was restricted to caveolae of cardiomyocytes, whereas caveolin-1 was only detected in capillary endothelia (Fig. 3). In contrast to skeletal muscle, where caveolin-3 was only observed in association with sarcolemmal caveolae, caveolin-3–positive caveolae were associated with both the plasma membrane and the T-tubules of cardiac muscle (not shown). Immunolocalization of caveolin-1 and caveolin-3 in adult mouse skeletal muscle. 0.5 μm frozen sections of mouse skeletal muscle (A–D) were labeled with antibodies to caveolin-3 (A) or caveolin-1 (C).
B and D show the corresponding phase images for A and C, respectively. (A) Caveolin-3 labels the periphery of the fiber with negligible internal staining. Note the regular meshwork of labeled elements at the ends of the muscle fiber. (C) In contrast to caveolin-3, antibodies to caveolin-1 specifically label the endothelial cells of muscle capillaries rather than the muscle fibers. Bars, 5 μm. Immunoelectron microscopic localization of caveolin-3 in adult mouse skeletal muscle. Ultrathin frozen sections of mouse skeletal muscle were labeled with antibodies to caveolin-3. Specific labeling is associated with sarcolemmal caveolae (small arrowheads indicate the sarcolemmal region; large arrowheads indicate labeled caveolae), as shown at higher magnification in the inset. An endothelial cell (e) in B is unlabeled. Double arrowheads indicate regions of the T-tubule system, which generally show negligible labeling. m, mitochondria; sl, sarcolemma; n, nucleus; z, Z-line. Bars: (A and B) 200 nm; (inset) 100 nm. Immunoelectron microscopic localization of caveolin-3 and caveolin-1 in adult cardiac tissue. Ultrathin frozen sections of mouse atrium were labeled with antibodies to caveolin-3 and caveolin-1. Small gold particles indicate labeling for caveolin-3 in A and for caveolin-1 in B. Caveolin-1 labeling (large arrows) is only detectable on caveolae of endothelial cells (e), whereas labeling for caveolin-3 is only evident within the cardiac muscle cells, showing the specificity of the two antibodies. In each case, labeling is associated with uncoated plasma membrane invaginations with the characteristics of caveolae. The arrowhead in A indicates an unlabeled clathrin-coated pit. p, plasma membrane. Bars, 100 nm. Earlier work has suggested a role for caveolae in the formation of the T-tubule system. We therefore examined the localization of caveolin-3 during muscle development in vivo and in vitro.
Electron microscopy studies have shown that the T-tubule system of mouse skeletal muscle starts to develop in the period before birth (Franzini-Armstrong, 1991). At this time T-tubules are orientated in the longitudinal direction along the length of the fiber. After birth the T-tubule system is reorganized to form the transverse arrangement characteristic of adult tissue (for review see Flucher, 1992). We therefore examined the distribution of caveolin-3 in mouse skeletal muscle between embryonic day 16 and 3 d after birth by immunofluorescence (Fig. 4). In embryonic mouse leg muscle, strong labeling for caveolin-3 was detectable around the periphery of muscle fibers. In addition, at early stages of development, labeling was apparent within punctate structures throughout the cell (Fig. 4 A). A characteristic feature of the labeling at this developmental stage was lines of regularly spaced puncta close to the surface membrane (Fig. 4 A). Internal staining was particularly striking in 18-d embryonic muscle. At this stage, clearly defined tubules, which apparently extended from the sarcolemmal region into the muscle fiber, were labeled by caveolin-3 antibodies (Fig. 4 B). Such structures are reminiscent of forming T-tubules. However, available antibodies to the DHPR gave low labeling of embryonic muscle tissue, consistent with a low expression level before birth (Morton and Froehner, 1989). Internal labeling for caveolin-3, although decreased, was also evident in newborn mouse muscle (Fig. 4 C). In 3-d-old mice, internal labeling was barely detectable (Fig. 4 D), consistent with the lack of internal labeling in mature muscle. Immunolocalization of caveolin-3 during muscle development. 0.5 μm frozen sections of mouse skeletal muscle were fixed by immersion at various stages of embryonic development (embryonic day 16, A; embryonic day 18, B), immediately after birth (C) or 3 d after birth (D). 
All sections were labeled at the same time, and images were prepared with the same exposure and development times. Specific labeling is associated with the periphery of the muscle cells at all stages. In addition, internal labeling is evident from embryonic day 16 up to birth. Labeling is often evident as punctate dots aligned along the longitudinal axis of the muscle cells (A, arrows). In the embryonic day 18 muscle, tubular structures which apparently lead from the sarcolemma are labeled (B, arrows). In the newborn muscle, and especially 3 d after birth, the internal labeling is decreased. Bars, 5 μm. The above results suggest that caveolin-3 associates with an internal compartment during muscle development. To investigate this process further, we turned to cell culture systems to study caveolin-3 during muscle development. We examined caveolin-3 distribution in two well-characterized culture systems: the C2C12 mouse myoblast/myotube cell line and primary cultured mouse muscle cells. C2C12 cells are a well-characterized model system for studies of muscle differentiation which, in the differentiated state, express caveolin-3 (Way and Parton, 1995). The distribution of caveolin-3 in C2C12 cells was examined using the affinity-purified anti-cav3-N antibodies and immunofluorescence microscopy. After fusion of C2C12 cells to form myotubes, caveolin-3 antibodies labeled an extensive tubular network within the cytoplasm (Fig. 5). As shown by confocal microscopy, the tubules are present within the depth of the muscle fiber, extend over many micrometers, and form a complex branching network throughout the cytoplasm. Labeling was specific for caveolin-3, as it was inhibited by the specific peptide to which the antibody was raised (results not shown). In addition, identical labeling was observed in a C2C12 cell line expressing epitope-tagged caveolin-3 (see Fig. 12). Immunolocalization of caveolin-3 in C2C12 cells.
C2C12 cells were maintained in culture in differentiating medium for 6–8 d after reaching confluency. Cells were fixed with paraformaldehyde and processed for immunofluorescent localization of caveolin-3. Cells were viewed by confocal microscopy. A and B show two sections of two C2C12 myotubes at different planes through the cell (A, close to the base of the cell; B, midway through the cell). Tubules and reticular structures (inset) run throughout the entire depth of the cell and are mainly orientated in the longitudinal direction. Specific labeling is associated with the differentiated C2C12 cells, whereas the underlying layer of undifferentiated cells (in the plane of the section shown in A) shows low labeling. Bars, 5 μm. We next examined the nature of the labeled organelles. In view of the morphology of the labeled elements, we used antibodies to a well-characterized T-tubule marker, the α1 subunit of the DHPR (α1-DHPR). As shown in Fig. 6, the two markers showed a high degree of colocalization, as analyzed by confocal microscopy. Some peripheral structures were labeled by antibodies to caveolin-3 but not anti-DHPR, but most of the tubular internal structures were labeled with both markers. The labeled elements were predominantly orientated in the longitudinal direction, characteristic of T-tubules in incompletely differentiated muscle cells in vivo (Franzini-Armstrong, 1991). C2C12 cells maintained for longer periods in culture (>10 d), when some of the myotubes showed spontaneous contractile activity, showed lower internal labeling (not shown). Caveolin-3 colocalizes with a T-tubule marker in C2C12 cells. C2C12 cells were cultured as described in the legend to Fig. 5. Cells were fixed with paraformaldehyde and double labeled for caveolin-3 (rhodamine; A, C, E, and G) and for the T-tubule–specific marker α1-DHPR (FITC; B, D, F, and H). Caveolin-3 colocalizes with α1-DHPR in many tubular/reticular structures throughout the cell (arrows).
Note that not all caveolin-3–positive structures are α1-DHPR positive (arrowheads). Note also the variation in the labeling for α1-DHPR (for example, see the relatively low level of labeling of the cell in F) but the clear colocalization with caveolin-3. Bars: (A–D) 5 μm; (E and F) 2.5 μm. To investigate the association of caveolin-3 with putative T-tubule elements in a more physiological culture situation, we examined the distribution of caveolin-3 in primary cultured mouse muscle cells. Primary cultured myotubes have been used extensively for studies of muscle differentiation and T-tubule formation and show a defined sequence of development in culture (Flucher et al., 1992). Muscle cells from the limbs of 18-d-old mouse embryos were dissociated by trypsin treatment and cultured for up to 24 d in vitro. Three days after plating, the culture medium was changed to differentiation medium, and 2 d later the cells started to show spontaneous contractile activity. Cells at different developmental stages were labeled for immunofluorescent detection of caveolin-3 and viewed by confocal microscopy. From the first day after plating, caveolin-3 was detectable in a small number of putative myoblasts (Fig. 7 A). Labeling first appeared in the perinuclear region of these cells. As the cells differentiated further, caveolin-3 labeling was observed in the cell periphery (Fig. 7, B and C). After fusion of myoblasts to form myotubes, caveolin-3 labeling greatly increased, and labeling appeared within an extensive system of tubular/reticular elements which penetrated the entire cytoplasm (Fig. 7, D–F) and appeared identical to the T-tubule labeling of C2C12 cells. Consistent with the studies of developing skeletal muscle tissue, with longer times in culture an increasing number of cells showed predominantly peripheral staining, with a reduction in the level of internal T-tubule labeling (Fig. 7 G).
The results show that in both primary cultured muscle cells and C2C12 cells, caveolin-3 associates with developing T-tubules. In differentiated cells and in mature muscle, caveolin-3 is no longer detectable within the T-tubule system but is associated with sarcolemmal caveolae. Immunolocalization of caveolin-3 during differentiation of primary muscle cells in culture. Mouse muscle cells from embryonic day 18 were cultured for various periods at 37°C before fixation and labeling for caveolin-3, as described in Materials and Methods. Cells were fixed 1 d after plating (A) or at various times after adding differentiation medium: 1 d (B and C), 5 d (D), 11 d (E and F), and 24 d (G). Cells were viewed by confocal (A–E and G) or conventional microscopy (F). Specific labeling is associated with the perinuclear and peripheral regions of day 0 myoblasts (A–C). From day 5 onwards, labeling is apparent within the putative T-tubule reticulum of fused myotubes (e.g., compare F with Fig. 12 F). At later stages, an increasing number of cells show surface labeling but low intracellular labeling (G). n, nuclei. Bars: (A–C) 5 μm; (D–G) 10 μm. The distribution of caveolin-3 in differentiating C2C12 cells was further examined by immunoelectron microscopy. C2C12 cells were fixed with glutaraldehyde and then processed for frozen sectioning. Sections were labeled with affinity-purified anti-cav3-N antibodies. Negligible labeling was found in undifferentiated cells, but in fused cells, caveolin-3 labeling was associated with elaborate networks of interconnected membranes apparently deep within the cytoplasm of the myotubes (Fig. 8). The networks were composed of regular repeating units which, at low magnification, had an almost crystalline appearance. Labeling for caveolin-3, although low, was apparent over these entire networks and was not detectable on other intracellular membranes or the plasma membrane.
Labeling appeared to be higher over the periphery of these networks and in the less tightly clustered regions. In these areas the individual units of the reticulum were apparent and, as shown in Fig. 8 B, strongly resembled caveolae. The dimensions of the individual units of the reticulum (Fig. 9 A, inset) suggest that these networks may be composed of fused caveolae or caveolae which have formed repeatedly but have not been released through a fission event. Immunoelectron microscopic localization of caveolin-3 in C2C12 cells. C2C12 cells were cultured as described in the legend to Fig. 5 and then fixed with a glutaraldehyde-containing fixative. The cells were then processed and immunolabeled for caveolin-3 followed by protein A–gold. Note the specific labeling (gold particles indicated by arrowheads) of extensive regular interconnected arrays of labeled membranes (asterisks). These structures are made up of unit structures with similar dimensions to caveolae (asterisks mark units of a reticulum in the inset). In the less compact arrays (shown at higher magnification in B), the labeling is stronger, apparently due to greater access to caveolin-3 epitopes, and the individual caveolae-like elements are clearly evident (arrows). Note the similarity of these structures to caveolae clusters of nonmuscle cells. Bars, 200 nm. Immunoelectron microscopic localization of caveolin-3 in C2C12 cells. C2C12 cells were cultured and processed for frozen sectioning as described in the legend to Fig. 8, except that the cells were fixed in paraformaldehyde. Sections were immunolabeled with affinity-purified antibodies to caveolin-3. Specific immunolabeling for caveolin-3 is associated with 50–60 nm budding profiles with characteristic caveolar morphology (arrowheads). The caveolin-3–labeled elements form complex, extended arrays which penetrate the center of the muscle cell.
The complex, clustered arrangement of these structures, as viewed in thin sections, suggests that in three dimensions the caveolae form large clusters resembling “bunches of grapes.” In some regions, reticular elements are evident (B, asterisk) which resemble forming T-tubules, as described in early morphological studies (Ishikawa, 1968), but are less compact and regular than those seen in glutaraldehyde-fixed cells. Note that the caveolin-3 labeling is typically associated with the bud-like profiles rather than the tubular interconnecting regions. The double arrowhead indicates a clathrin-coated pit. Bars, 200 nm. We examined this further in paraformaldehyde-fixed cells. As shown in Fig. 9, the reticular structures appeared to be less well preserved under these fixation conditions, as they were less extensive and more irregular. However, labeling for caveolin-3 was clearly increased compared to glutaraldehyde-fixed cells and was associated with large clusters of caveolae-like elements. Specific labeling was invariably associated with caveolae-like profiles and not with any intervening flat membrane. The labeled structures extended over several micrometers into the center of the cells (Fig. 9). The labeled elements also had associated clathrin-coated pits, consistent with their identification as T-tubule elements. The similarity of the caveolin-3–labeled elements to the previously described precursor T-tubule elements (Ishikawa, 1968) and the colocalization of caveolin-3 with a T-tubule marker by confocal microscopy strongly support the idea that caveolin-3 associates with developing T-tubules. However, the available antibodies to the DHPR gave weak labeling by electron microscopy. To confirm that these elements were surface connected, we incubated differentiating C2C12 cells with peroxidase-labeled cholera toxin binding subunit (CT-B) at 4°C, a temperature at which endocytosis is blocked.
The plasma membrane receptor for cholera toxin, GM1, has previously been shown to be enriched in caveolae (Montesano et al., 1982; Parton, 1994). Surface-labeled cells were fixed and embedded in Epon. Semi-thick sections were cut parallel to the culture substratum. As shown in Fig. 10, CT-B–peroxidase labeled the cell surface as well as long tubules and reticular structures throughout the cell. The labeled tubular elements were preferentially orientated in the longitudinal direction, characteristic of developing T-tubules. Both the tubular and reticular elements (Fig. 10, insets) showed the characteristic morphology of “fused” caveolar elements and appeared similar to those labeled with caveolin-3. We then repeated the above experiment using unlabeled CT-B and processed the cells for frozen sectioning. Thawed sections were labeled with antibodies to caveolin-3 and CT-B. As shown in Fig. 11, labeling for CT-B was associated with the cell surface and also with the caveolin-3–positive putative T-tubule elements. These results confirm that a large number of the labeled elements are indeed surface connected. Taken together, the results suggest that during muscle differentiation, caveolin-3 associates with developing T-tubules. The developing T-tubule system takes the form of a reticulum made up of individual units with morphological features and components characteristic of caveolae. These observations are consistent with the involvement of repeated caveolae formation in T-tubule development. Cholera toxin peroxidase labeling of differentiating C2C12 cells. C2C12 cells cultured as described in the legend to Fig. 5 were incubated with CT-B–peroxidase for 2 h at 4°C and then fixed and processed for embedding in Epon. Semi-thick sections (∼200 nm) were cut parallel to the substratum. Peroxidase-labeled elements are seen within the depths of the cell (arrows).
The labeled structures are composed of individual bud-like elements of ∼60 nm diam, which form chains or reticula of interconnected structures (insets). Bars, 200 nm. Immunoelectron microscopic localization of caveolin-3 in cholera toxin surface-labeled C2C12 cells. C2C12 cells cultured as described in the legend to Fig. 5 were incubated with CT-B for 2 h at 4°C before fixation with paraformaldehyde. Ultrathin frozen sections were immunolabeled with antibodies to cholera toxin, detected with 15 nm protein A–gold (arrowheads), and with antibodies to caveolin-3, detected with 10 nm protein A–gold. Cholera toxin labeling is evident on the cell surface and in the extensive caveolin-3–positive tubulovesicular elements. pm, plasma membrane. Bars, 200 nm. Finally, we examined the localization of caveolin-3 with respect to caveolin-1. We have previously shown that caveolin-3 and caveolin-1 colocalize when expressed in fibroblasts. To examine the distribution of the two proteins in myoblasts, which express caveolin-1 but not caveolin-3 (Way and Parton, 1995), a stable C2C12 cell line containing caveolin-3 with a COOH-terminal HA tag was generated. We examined the distribution of epitope-tagged caveolin-3 and of caveolin-1 during differentiation by double labeling with antibodies to the NH2 terminus of endogenous caveolin-1 (Dupree et al., 1993) and the 12CA5 antibody against the HA tag. In undifferentiated C2C12-CAV3HA cells 24 h after plating, there was clear colocalization of caveolin-1 and epitope-tagged caveolin-3 at the cell periphery (Fig. 12, A and B). Therefore, in these undifferentiated muscle cells, as in fibroblasts (Way and Parton, 1995), caveolin-3 is apparently directed to caveolae. In contrast, 48 h after plating, but before fusion into myotubes, a number of cells started to show a different labeling pattern for caveolin-1 and caveolin-3 (Fig. 12, C and D), with caveolin-1 generally showing a more diffuse labeling pattern.
In fused C2C12-CAV3HA cells, epitope-tagged caveolin-3 was localized to the T-tubule system running throughout the cytoplasm (Fig. 12, E, F, and H). In contrast, caveolin-1 showed negligible labeling in the differentiated cells (Fig. 12 G), suggesting that the expression of caveolin-1 is reduced upon muscle differentiation. It therefore appears that caveolin-3 expressed in C2C12 cells colocalizes with caveolin-1 in the nondifferentiated state, but, as the cells differentiate, the two markers are separated. Caveolin-1 is sorted away from caveolin-3, which eventually associates with the T-tubule system. Immunolocalization of epitope-tagged caveolin-3 and endogenous caveolin-1 in C2C12 cells. C2C12 cells expressing caveolin-3 with a COOH-terminal HA tag were fixed after 1 d (A and B), 2 d (C and D), or after 6–8 d (4–6 d after the cells reached confluency and differentiation medium was added) (E–H). The cells were then labeled with antibodies to caveolin-1 (A, C, and G) or to the HA tag (B, D, E, F, and H). A–D, G, and H show cells double labeled for caveolin-1 and the epitope-tagged caveolin-3. In day 1 myoblasts (A and B), the endogenous caveolin-1 and expressed caveolin-3 colocalize (arrows). At later times (C and D), some cells clearly show a different labeling pattern for the two caveolin proteins (arrows indicate comparable regions of the two cells which are labeled for caveolin-3 but not caveolin-1). After fusion of myoblasts to form myotubes, the epitope-tagged caveolin-3 is present within the T-tubule system which runs throughout the cell (E, F, and inset, arrows). G and H show cells double labeled for caveolin-1 and epitope-tagged caveolin-3. While caveolin-1 labeling is present in neighboring, undifferentiated myoblasts, labeling is very low in the multinucleate myotube (arrows); the weak staining represents background labeling, as shown by peptide inhibition (not shown). n, nucleus.
Bars: (A–D) 2 μm (same magnification); (E–H) 5 μm (E and F, G and H, same magnifications). The T-tubule system of mammalian muscle cells is an extensive membranous system which penetrates the entire muscle fiber yet is continuous with the muscle plasma membrane. The protein and lipid composition of the T-tubule system is distinct from that of the sarcolemma. How this system develops and maintains its unique composition is a fundamental problem in cell biology. In the present study we have shown that caveolin-3, a member of a family of integral membrane proteins proposed to be involved in organizing membrane form and composition, is associated with precursor T-tubule elements in skeletal muscle. Our studies of caveolin-3 and caveolin-1 in C2C12 cells, primary cultured myotubes, and developing muscle in vivo suggest the following series of events. As myoblasts fuse to form myotubes, expression of caveolin-1 starts to decrease, and expression of the muscle-specific protein, caveolin-3, is dramatically increased. Caveolin-3 then appears in association with tubular elements which penetrate the entire cytoplasm of the myotubes but are predominantly orientated longitudinally along the length of the muscle fiber. These tubules are labeled by antibodies to the α1 subunit of the dihydropyridine receptor. By electron microscopy, the caveolin-3–labeled structures resemble elaborate, regular arrays of interconnected clusters of caveolae and are connected to the cell surface. These elements therefore have the characteristics of precursor T-tubules (Flucher et al., 1992). Around the time of birth, the level of caveolin-3 associated with the T-tubules starts to decrease, and in adult muscle, caveolin-3 is no longer detectable within the T-tubule system but is highly concentrated in sarcolemmal caveolae.
The association of caveolin-3 with T-tubules is therefore restricted to a precursor T-tubule stage in which the forming T-tubules appear to consist of interconnected caveolae-like elements. These results raise the intriguing possibility that caveolae, and in particular caveolin-3, are involved in the biogenesis of the T-tubule system during muscle differentiation. Early electron microscopic studies were the first to suggest a role for caveolae, identified purely on morphological grounds, in early T-tubule formation in cultured myotubes (Ezerman and Ishikawa, 1967; Ishikawa, 1968). These studies suggested that caveolae form repeatedly but keep their connectivity with the surface, leading to the generation of extensive, regular arrays of interconnected caveolae-like structures. The resulting reticulum was proposed to represent the precursor T-tubule system based on its connectivity with the cell surface and its junctions with the sarcoplasmic reticulum. Later studies showed that these structures contained markers characteristic of the T-tubule system but excluded sarcolemmal components (Flucher et al., 1993). In vivo studies also confirmed the association of caveolae-like structures with developing T-tubules during embryonic muscle development (Franzini-Armstrong, 1991). Moreover, in regenerating muscle fibers, the reforming T-tubule system was shown to be composed of caveolar elements often having a “honeycomb” appearance (Miike et al., 1984). Taken together, these studies strongly argue for a role for caveolae-like structures at an early stage in T-tubule formation, but their identification as caveolae was based on morphology alone. The findings of the present study clearly show that these elements are indeed caveolae, as demonstrated by labeling with the caveolin-3 antibody. 
The reticular precursor T-tubule elements described in the above studies are apparently identical to the structures labeled with antibodies to caveolin-3 in C2C12 cells in the present study (e.g., compare Figs. 8 and 9 with Figs. 2 or 10 of Ishikawa, 1968) and show for the first time that a caveolin protein is associated with the developing T-tubule system. We have also shown that at least some of these structures are accessible to, and labeled by, a cholera toxin conjugate administered in the extracellular medium. Our results are therefore consistent with the proposed endocytic model for T-tubule formation (Flucher, 1992). A second model for T-tubule formation invokes a role for exocytic, rather than endocytic, transport. In this model, the precursor T-tubule system is formed as a result of exocytic transport and is not initially connected to the plasma membrane. Recent studies provided evidence for a combination of these models, as some T-tubule precursor elements were shown to be discontinuous with the cell surface at early stages of development (Flucher et al., 1991). Further work will be required to ascertain how this compartment relates to the caveolin-labeled elements described here. At present we cannot rule out that newly formed caveolae can fuse with each other to form clusters which subsequently fuse with the cell surface. In many respects, the caveolae clusters shown here resemble the caveolin-1–positive caveolae of nonmuscle cells. Caveolae in many cell types form extensive arrays of interconnected structures which penetrate the cytoplasm (Parton, 1996). This is particularly evident after okadaic acid treatment, when caveolae form large clusters which appear to be pulled into the center of the cell in an actin- and microtubule-dependent manner (Parton et al., 1994). These structures show a remarkable resemblance to the caveolin-3–positive caveolae clusters seen in the present study (Figs. 8 and 10).
Taken together, these studies suggest that caveolae have a propensity to form such structures, and that in muscle cells these structures form the basis for the development of the T-tubule system. In view of the known characteristics of caveolins, the postulated role of caveolae in T-tubule formation, and the transient detection of caveolin-3 within the developing T-tubule system, we speculate that caveolin-3 might be required to generate the unique protein and lipid composition of the T-tubule system. We have previously shown that caveolin-1 expression in caveolae-deficient cells causes de novo formation of caveolae (Fra et al., 1995b). The high density of caveolin-1 in the caveolar membrane, as well as the need for a threshold level of caveolin-1 in the plasma membrane to produce caveolae (Parton, 1996), both argue for a structural role of caveolin in caveolae formation. Two recently described properties of caveolin-1 might be important in caveolar-domain formation. First, caveolin-1 self-associates to form oligomers (Monier et al., 1995; Sargiacomo et al., 1995). Second, caveolin is a cholesterol-binding protein (Murata et al., 1995). Cholesterol is essential for caveolar form and function (Rothberg et al., 1990, 1992), and it has been proposed that the interaction of caveolin with cholesterol in glycosphingolipid-enriched domains may be necessary for caveolae formation (Parton and Simons, 1995). Caveolin-3 appears to show a similar propensity to form oligomers (Tang et al., 1996) and shows high sequence homology in that region of the caveolin molecule postulated to be involved in oligomer formation (Sargiacomo et al., 1995). The region of the molecule involved in cholesterol binding is still unknown, but the three known caveolins have particularly high homology in the intramembrane and membrane proximal regions (Way and Parton, 1995; Tang et al., 1996).
Thus, we speculate that caveolins may be general modulators of the plasma membrane, being able to generate the unique protein and lipid composition of caveolae or, in the case of caveolin-3, of the precursor T-tubule domain. Intriguingly, T-tubules, like caveolae, are known to be enriched in cholesterol (Hidalgo et al., 1983; Horgan and Kuypers, 1987), and this has even been used as a marker in fractionation studies (Knudson and Campbell, 1989). In addition, in the present study we have shown that the precursor T-tubule system is labeled by cholera toxin conjugates, which are concentrated in caveolae of other cells (Parton, 1994). While further work will be required to ascertain whether the receptor for cholera toxin, the ganglioside GM1, is actually concentrated within T-tubules, a logical extension of this model for T-tubule formation is that the protein and lipid composition of the T-tubule system may be maintained by principles similar to those of caveolae. Both caveolae and T-tubules represent membrane systems which are continuous with the plasma membrane but have a distinct composition. Caveolin-3 might be involved in the initial process of generating the T-tubule domain, with cytoskeletal elements then assuming the role of maintaining this structure in its precise alignment on either side of the Z-lines. Perhaps caveolae at the neck of the T-tubules in mature muscle (Franzini-Armstrong et al., 1975; Zampighi et al., 1975) act as barriers to prevent lipids and proteins of the sarcolemma and T-tubules from intermixing, as previously hypothesized (Flucher, 1992). In view of the importance of lipid-based sorting mechanisms in a number of different aspects of cellular organization (Simons and Ikonen, 1996), the hypothesis that the distinct sarcolemmal and T-tubule compositions may be generated using principles similar to those of caveolae clearly warrants further attention. 
In mature skeletal muscle, caveolin-3 is restricted to sarcolemmal caveolae and is no longer detectable in the T-tubule system. A decrease in internal labeling was already apparent around birth. From birth onwards, the T-tubules are gradually reorganized from longitudinally orientated tubules to the regularly spaced radially orientated T-tubules characteristic of mature muscle (Franzini-Armstrong, 1991). Our observations of developing mouse muscle, primary mouse muscle cultures, and C2C12 cells suggest that the association of caveolin-3 with the T-tubule system is restricted to the predominantly longitudinal precursor T-tubules. This longitudinal arrangement has been shown to persist for many days in cultured C2C12 cells and in primary cultures but appears to be short-lived in vivo (Flucher, 1992), fitting well with the results of the present study. As the T-tubules reorganize, caveolin-3 may be removed from the T-tubules, by some unknown recycling mechanism, or newly synthesized protein may be directed away from the T-tubule system to the sarcolemma. Alternatively, the epitope recognized by the antibody, at the NH2 terminus, may be masked. A precedent exists for the latter, as intracellular labeling for caveolin-1 is not detectable within the trans-Golgi network of fibroblasts with antibodies to the NH2 terminus of the protein, but only with antibodies to the COOH terminus (Dupree et al., 1993). However, as the morphological features characteristic of caveolae are not detectable within the T-tubules of skeletal muscle, we favor the view that caveolin-3 is not present in the mature T-tubule system but only during its development. The fact that caveolin-3 is present in T-tubules during development but not in the final differentiated state, when it is restricted to sarcolemmal caveolae, suggests that caveolin-3 might have two distinct functions: as a morphogenetic element involved in forming the T-tubule domain, and as a component of sarcolemmal caveolae. 
The role in sarcolemmal caveolae is presumably similar to that of caveolin-1 in other cells, that is, maintenance of caveolar form and signal transduction. The functional interaction of caveolin-3 with trimeric G protein α subunits is consistent with a role in signaling in muscle cells similar to that of caveolin-1 in nonmuscle cells (Tang et al., 1996). It should also be noted that caveolin-3 is present in smooth muscle cells, which do not have a T-tubule system (Song et al., 1996). This again indicates that the function of caveolin-3 is not restricted to T-tubule formation. Caveolin-1 was not detectable by immunofluorescence in differentiated C2C12 cells or in adult muscle tissue, although endothelial capillaries were heavily labeled (Fig. 12). However, undifferentiated myoblasts express caveolin-1, and we show here that expression of caveolin-3 in these cells results in initial colocalization and then segregation. Future studies should establish the signals involved in the sorting of the two proteins to different cellular compartments. While caveolin-3 associates with the T-tubules, the level of caveolin-1 decreases, suggesting that caveolin-3 replaces caveolin-1 as the major caveolin of differentiated muscle cells. The absence of caveolin-1 in differentiated cultured muscle cells is consistent with two previous studies (Munoz et al., 1996; Tang et al., 1996). However, despite the absence of caveolin-1 in differentiated cultured cells, Munoz et al. (1996) detected caveolin-1 within muscle tissue by Western blotting. This apparent discrepancy may be explained by the abundance of caveolin-1 in endothelial cells in the muscle tissue. In conclusion, we speculate that caveolin-3 plays a role in T-tubule formation analogous to that of caveolin-1 in caveolae formation, and that caveolae and T-tubules may represent different manifestations of a lipid-based sorting phenomenon. 
In the final mature muscle, caveolin-3 has an additional role in sarcolemmal caveolae where, as in other cells, it is presumably involved in interactions with signaling molecules. These studies raise the intriguing possibility that T-tubules and caveolae may use similar principles to generate and maintain their form and composition. The authors would like to thank Brigitte Joggerst (European Molecular Biology Laboratory, Heidelberg, Germany) and Colin MacQueen (VTHRC, University of Queensland) for excellent technical assistance. We are indebted to Drs. Melissa Little, Jenny Stow, and David James for providing reagents and to Dr. Peter Noakes for advice regarding primary cultures. We are particularly grateful to Drs. David James and Kai Simons for enlightening discussions and for their comments on the manuscript. This work was supported by grants to R.G. Parton from the National Health and Medical Research Council of Australia and from the Human Frontiers Science Foundation.

References

(1993) Potocytosis of small molecules and ions by caveolae. Trends Cell Biol 3:69–72, pmid:14731772. (1995) Regulation of the acetylcholine receptor epsilon subunit gene by recombinant ARIA: an in vitro model for transsynaptic gene regulation. Neuron 14:329–339, pmid:7857642. (1975) The relative contributions of the folds and caveolae to the surface membrane of frog skeletal muscle fibres at different sarcomere lengths. J Physiol (Lond) 250:513–539, pmid:1080806. (1993) Caveolae and sorting in the trans-Golgi-network of epithelial cells. EMBO (Eur Mol Biol Organ) J 12:1597–1605, pmid:8385608. (1967) Differentiation of the sarcoplasmic reticulum and T-system in developing chick skeletal muscle in vitro. J Cell Biol 35:405–420. (1992) Coordinated development of myofibrils, sarcoplasmic reticulum and transverse tubules in normal and dysgenic mouse skeletal muscle in vivo and in vitro. Dev Biol 150:266–280, pmid:1551475. 
(1993) Development of the excitation-contraction coupling apparatus in skeletal muscle: association of sarcoplasmic reticulum and transverse tubules with myofibrils. Dev Biol 160:135–147, pmid:8224530. (1994) Detergent-insoluble glycolipid microdomains in lymphocytes in the absence of caveolae. J Biol Chem 269:30745–30748, pmid:7982998. (1995a) A photo-reactive derivative of ganglioside GM1 specifically cross-links VIP21-caveolin on the cell surface. FEBS Lett 375:11–14, pmid:7498456. (1995b) De novo formation of caveolae in lymphocytes by expression of VIP21-caveolin. Proc Natl Acad Sci USA 92:8655–8659, pmid:7567992. (1991) Simultaneous maturation of transverse tubules and sarcoplasmic reticulum during muscle differentiation in the mouse. Dev Biol 146:353–362, pmid:1864461. (1975) Size and shape of transverse tubule openings in frog twitch muscle fibers. J Cell Biol 64:493–497, pmid:1078824. (1993) Calcium pump of the plasma membrane is localized in caveolae. J Cell Biol 120:1147–1157, pmid:8382206. (1992) Localization of inositol 1,4,5-triphosphate receptor-like protein in plasmalemmal caveolae. J Cell Biol 119:1507–1513, pmid:1334960. (1978) Inpocketings of the plasma membrane (caveolae) in the rat myocardium. J Ultrastruct Res 65:135–147, pmid:731782. (1978) Effect of stretch and contraction on caveolae of smooth muscle cells. Cell Tissue Res 190:255–271, pmid:679259. (1986) Specific binding sites for albumin restricted to plasmalemmal vesicles of continuous capillary endothelium: receptor-mediated transcytosis. J Cell Biol 102:1304–1311, pmid:3007533. (1995) Glycolipid-anchored proteins in neuroblastoma cells form detergent-resistant complexes without caveolin. J Cell Biol 129:619–627, pmid:7537273. Griffiths, G. 1993. Fine Structure Immunocytochemistry. Springer-Verlag, Berlin/Heidelberg. 459 pp. (1983) Characterization of the Ca2+- or Mg2+-ATPase of transverse tubule membranes isolated from rabbit skeletal muscle. J Biol Chem 258:13937–13945, pmid:6139374. 
(1987) Isolation of transverse tubules by fractionation of sarcoplasmic reticulum preparations in ion-free sucrose density gradients. Arch Biochem Biophys 253:377–387, pmid:2952065. (1968) Formation of elaborate networks of T-system tubules in cultured skeletal muscle with special reference to the T-system formation. J Cell Biol 38:51–66, pmid:5691978. (1989) Albumin is a major protein component of transverse tubule vesicles isolated from skeletal muscle. J Biol Chem 264:10795–10798, pmid:2732247. (1994) VIP21-Caveolin, a protein of the trans-Golgi network and caveolae. FEBS Lett 346:88–91, pmid:8206165. (1995) Evidence for a regulated interaction between heterotrimeric G proteins and caveolin. J Biol Chem 270:15693–15701, pmid:7797570. (1994) Caveolae, caveolin and caveolin-rich membrane domains: a signaling hypothesis. Trends Cell Biol 4:231–235, pmid:14731661. (1994) Cloning and subcellular localization of novel rab proteins reveals polarized and cell type-specific expression. J Cell Sci 107:3437–3448, pmid:7706395. (1984) Behaviour of sarcotubular system formation in experimentally induced regeneration of muscle fibers. J Neurol Sci 65:193–200, pmid:6481398. (1995) VIP21-caveolin, a membrane protein constituent of the caveolar coat, forms high molecular mass oligomers in vivo and in vitro. Mol Biol Cell 6:911–927, pmid:7579702. (1982) Non-coated membrane invaginations are involved in binding and internalization of cholera and tetanus toxins. Nature (Lond) 296:651–653, pmid:7070509. (1989) The α1 and α2 polypeptides of the dihydropyridine-sensitive calcium channel differ in developmental expression and tissue distribution. Neuron 2:1499–1506, pmid:2560646. (1996) Isolation and characterization of distinct domains of sarcolemma and T-tubules from rat skeletal muscle. J Biol Chem 271:8133–8139, pmid:8626501. (1995) VIP21-caveolin is a cholesterol-binding protein. Proc Natl Acad Sci USA 92:10339–10343, pmid:7479780. 
(1994) Ultrastructural localization of gangliosides; GM1 is concentrated in caveolae. J Histochem Cytochem 42:155–166, pmid:8288861. (1996) Caveolae and caveolins. Curr Opin Cell Biol 8:542–548, pmid:8791446. (1995) Digging into caveolae. Science (Wash DC) 269:1398–1399, pmid:7660120. (1989) Meeting of the apical and basolateral endocytic pathways of the Madin-Darby canine kidney cell in late endosomes. J Cell Biol 109:3259–3272, pmid:2557351. (1994) Regulated internalization of caveolae. J Cell Biol 127:1199–1215, pmid:7962085. (1974) Conceptual model of the excitation contraction coupling in smooth muscle: the possible role of the surface microvesicles. Stud Biophys 44:141–153. (1976) The sarcolemma of Aplysia smooth muscle in freeze-fracture preparations. Tissue Cell 8:248–258, pmid:941133. (1990) Cholesterol controls the clustering of the glycophospholipid-anchored membrane receptor for 5-methyltetrahydrofolate. J Cell Biol 111:2931–2938, pmid:2148564. (1992) Caveolin, a protein component of caveolae membrane coats. Cell 68:673–682, pmid:1739974. (1995) Oligomeric structure of caveolin: implications for caveolae membrane organization. Proc Natl Acad Sci USA 92:9407–9411, pmid:7568142. (1996) Identification, sequence, and expression of caveolin-2 defines a caveolin gene family. Proc Natl Acad Sci USA 93:131–135, pmid:8552590. (1977) T-system formation in cultured rat skeletal tissue. Tissue Cell 9:437–446, pmid:929575. (1994) Filipin-sensitive caveolae-mediated transport in endothelium: reduced transcytosis, scavenger endocytosis, and capillary permeability of select macromolecules. J Cell Biol 127:1217–1232, pmid:7525606. Simons, K., and E. Ikonen. 1996. Sphingolipid-cholesterol rafts in membrane trafficking and signalling. Nature (Lond.). In press. (1996) Expression of caveolin-3 in skeletal, cardiac and smooth muscle cells. J Biol Chem 271:15160–15165, pmid:8663016. 
(1996) Molecular cloning of caveolin-3, a novel member of the caveolin gene family expressed predominantly in muscle. J Biol Chem 271:2255–2261, pmid:8567687. (1995) M-caveolin, a muscle-specific caveolin-related protein. FEBS Lett 376:108–112, pmid:8521953. (1975) On the connections between the transverse tubules and the plasma membrane in frog semitendinosus skeletal muscle: are caveolae the mouths of the t-tubule system. J Cell Biol 64:734–740, pmid:1080153.
Greenleaf G., "The Global development of free access to legal information", in European Journal of Law and Technology, Vol. 1, Issue 1, 2010. Legal information institutes share two defining characteristics: (i) they publish legal information from more than one source (not just 'their own' information), for free access via the Internet, and (ii) they collaborate with each other through membership of the 'Free Access to Law Movement'. Most, but not all, share three other characteristics: technical networks for back-up security purposes; independence from government (though this is diminishing as a distinguishing feature); and use of one of two open-source search engines, the Sino search engine developed by AustLII (previously shared with other LIIs, and open source since 2006) or the Lucene search engine utilised by LexUM in the development of various LIIs. 'Legal information institute' (or 'LII'), as used here, therefore refers to a sub-set of the providers of free access to law, namely those from across the world who have decided to collaborate both politically and technically. Taken together, the LIIs are the most coordinated, and among the largest, providers of free access to legal information, but they are far from alone in providing it. This chapter is not about 'free access to law' per se, but focuses on a particular grouping of providers of free access to legal information, while discussing the more general context of 'free access to law' in which they operate. Three LIIs played key roles in early developments: the Legal Information Institute (Cornell), AustLII and LexUM. They each developed from research projects on various aspects of legal automation going back to the 1980s, and were ready to capitalise on the worldwide web's sudden emergence into public prominence around 1994. 
LexUM at the University of Montreal commenced in 1993, with a Law Gopher server (then via the Public Law Research Center), and created the first Canadian legal site and the first legal site available in French, as well as carrying out many research and consultancy projects. During the 1990s it built various Canadian law sites, including the Judgments of the Supreme Court of Canada website. In 2000 LexUM built the Canadian Legal Information Institute (CanLII), which quickly became a very large national LII comprehensively covering Canada's federal system, matching AustLII in size and usage. LexUM initially used the Sino search engine, and then adopted the open-source Lucene search engine and other development tools. CanLII's databases now include decisions of Canadian superior courts and a broad range of administrative tribunals (more than 120 databases), with historical scope typically back to around 2000 but sometimes considerably earlier (to 1985 for Supreme Court decisions). It also publishes historical and up-to-date versions of legislation from all but one of the 14 Canadian jurisdictions. It has a bilingual (English-French) user interface. CanLII innovations include the Reflex citator. This provides for each decision on CanLII a 'Reflex record' listing related decisions, 'note-ups' (decisions citing the decision), and legislation and decisions cited. From 2000 AustLII started to use its search engine (Sino) and other software to assist organisations in other countries, initially limited to those with academic roots, to establish LIIs with similar functionality. Between 2000 and 2004, AustLII helped to establish servers and databases for five LIIs (BAILII, PacLII, HKLII, SAFLII and NZLII). It operated the servers from Sydney for a period on behalf of its local partners, with progressive local take-over of operations. All use AustLII's Sino search engine. 
Responsibility for obtaining and developing legal data was usually undertaken by the local partner from the outset. The British & Irish Legal Information Institute (BAILII), formed in 2000, is based at the Institute of Advanced Legal Studies, London and operated by the BAILII Trust. BAILII includes almost 80 databases covering 6 jurisdictions (United Kingdom, England and Wales, Scotland, Northern Ireland, Ireland and some European Court decisions), including case law, legislation and law reform reports from all the jurisdictions it covers. Back-capture of cases and law reform documents through its Open Law Project gives it considerable historical depth. The Hong Kong Legal Information Institute (HKLII) has been operated since 2002 by the University of Hong Kong, with 12 databases of the law of the Hong Kong Special Administrative Region (SAR). It is a bilingual system and has developed its own search engine for the Chinese content. An innovation is its joint operation of the Community Legal Information Centre (CLIC), a bilingual community legal information website with extensive links to HKLII. HKLII and LawPhil in the Philippines were the first LIIs in Asia. The New Zealand Legal Information Institute (NZLII), based at the University of Otago's Faculty of Law since 2004, now has 30 databases covering almost all significant New Zealand Courts and Tribunals, bilateral treaties, law reform reports, and four law journals. Obtaining content for free access involved many years' effort. The final element, legislation, was added in 2008, making its coverage of current law near-comprehensive. In addition, CyLaw in Cyprus was established in 2002 by a local lawyer using AustLII's Sino search engine; it contains all judgments issued by the Supreme Court of Cyprus since 1997 (in Greek) and other databases, but has been independently operated from inception. 
All of the systems AustLII has assisted are now operated with independent local control and resources, and this is the major reason for their success. AustLII's aim of assisting partners to achieve full local take-over, as quickly as possible, has been effective, with only the server of NZLII (the most recently formed) still being operated by AustLII. Having established CanLII, LexUM used the tools it had developed to create, with local partners, Droit Francophone (2003), JuriBurkina (2003) and JuriNiger (2007). Droit Francophone is discussed later. JuriBurkina is the judicial information centre of Burkina Faso, launched in 2004 and operated by the Burkina Faso Bar Association with LexUM's assistance. It provides over 1,000 decisions up to 2007 from eight of the country's courts and tribunals. JuriNiger provides nearly 2,000 decisions of five courts to 2007. It was developed by LexUM in conjunction with the Ordre des Avocats du Niger and is operated from the LexUM servers. The Free Access to Law Movement (FALM), established in 2002, is a loose affiliation of 26 Legal Information Institutes (as of 2009). The group of LIIs associated with the LII (Cornell), LexUM and AustLII made the initial attempts to establish collaboration and organisation to further free access to law globally, but FALM has become a broader grouping since then. The 'Law via Internet' Conferences have been the principal means by which this co-operation was established. The first was hosted by AustLII in 1997, as were the 2nd (1999), 3rd (2001) and 5th (2003). LexUM/CanLII hosted the 4th (2002), French organisations (as FrLII) hosted the 6th (2004), PacLII the 7th (2006), LexUM/CanLII the 8th (2007) and the Istituto di Teoria e Tecniche dell'Informazione Giuridica (ITTIG) in Florence the 9th (2008). SAFLII will host the 10th Conference in 2009. Many of the conference papers are available online and comprise a considerable resource on legal information systems. 
The Free Access to Law Movement (FALM) meets annually during the Conference, and operates by email between conferences. The first sustained attempt to build some form of international network took place at Cornell in July 2000, involving participants from the US, Canada, Australia, the UK and South Africa. The expression 'WorldLII' was first used there to describe a collaborative LII portal. The FALM was then formed at the 2002 Conference in Montreal, and adopted the Declaration on Free Access to Law (see Appendix 2 for text). The Declaration has had some amendments since then. Membership is by invitation, with members nominating new candidates, and consensus is required. The membership criteria are not fixed but involve adherence to, and support of, the Declaration and activities similar to (but not necessarily identical with) a LII. At the 2007 meeting initial steps were taken to turn the 'Movement' into a more formally constituted 'Association' (FALA), but these have not yet proceeded further. The membership of FALM has expanded beyond the initial members discussed above, and the four portals discussed below, to include other national LIIs from Argentina, France, Ireland, Italy, Germany, Mexico, The Philippines, Spain and Thailand. The 26 members are listed in Appendix 1. The principal aim of the FALM, re-affirmed at its 2007 meeting, is the provision of assistance by its members to organisations who wish to provide free access to law in countries where that has not yet occurred. This has been successful, as outlined above. It also provides mutual support to organisations already providing free access to law who wish to join the FALM. The Declaration recognises 'the primary role of local initiatives in free access publishing of their own national legal information'. A second aim stated in the Declaration is that 'All legal information institutes are encouraged to participate in regional or global free access to law networks.' 
As the Declaration puts it, the aim is 'To cooperate in order to achieve these goals and, in particular, to assist organisations in developing countries to achieve these goals, recognising the reciprocal advantages that all obtain from access to each other's law.' The main activities of the FALM, in light of these aims, have been the sharing of software, technical expertise and experience on policy questions such as privacy issues. The Declaration encourages LIIs to 'participate in regional or global free access to law networks.' Before 2002 there were some national and regional LIIs, but no multi-LII networks. BAILII and PacLII were multi-country, regional systems from inception (and SAFLII became one), but did not involve material from other LIIs. The World Legal Information Institute (WorldLII) was launched in 2002. It was the first multi-LII site, initially providing search access to the databases from AustLII, BAILII, PacLII, HKLII and CanLII, and from South Africa (before SAFLII was formed). The Free Access to Law Movement adopted it as its joint portal in 2002. It has three main aspects: as a portal making multiple LIIs simultaneously searchable; its own databases; and its catalogue and web search facilities. WorldLII is organised primarily by country, providing, from the page for each country in the world, as many complementary legal research facilities (databases, catalogue and web search) as possible. WorldLII's networking of multiple LIIs makes it the largest free-access legal research facility on the Internet, because it makes the databases provided by the other collaborating LIIs simultaneously searchable. By 2009 WorldLII comprised nearly 800 databases, from over 100 countries, on all continents. Databases from the LIIs that cooperate most closely with AustLII are the principal source of the databases searchable via WorldLII, mainly because the use of a common search engine (AustLII's Sino) makes technical cooperation easier to achieve. 
The databases from 40 countries of the Global Legal Information Network (GLIN) (discussed below) are another significant searchable resource. WorldLII also includes over 700,000 US Circuit Court of Appeals cases republished from public US sources, and access to the US Code provided by the LII (Cornell). Databases from Droit Francophone are not at present available (see below), and the continuing availability of CanLII's databases is unresolved. WorldLII's own databases are primarily 22 databases of decisions of international courts and tribunals in the International Courts and Tribunals Library (the largest such searchable collection available via the Internet), and some databases in the Privacy Law Library. A new element of WorldLII in 2009 was the creation of 'virtual databases' for each country in the world, drawing on law journal articles, treaties, international court decisions and other globally relevant content available through WorldLII to create country-specific databases. The WorldLII Catalogue is the largest law-specific catalogue on the Internet, with links to over 15,000 law-related websites (concerning every country and most international institutions), and a subject index. It is one of the few global law catalogues still being maintained (though only minimally at present) in the face of the popularity of search engines, but it is biased toward English-language content. The web search facility uses AustLII's web spider to make searchable the full texts of as many sites as possible in the Catalogue, but its scope and interface are at present inferior to commercial search engines. WorldLII (and CommonLII and AsianLII, discussed below) also provide a 'Law on Google' facility for each country, which translates a search in WorldLII's Sino syntax into an effective search over Google. However, the search is limited to material from the country concerned and to legal content. This facility may be generalised to other search engines in future. 
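The kind of query translation behind such a 'Law on Google' facility can be illustrated with a minimal sketch. The Sino connectors handled here (and, or, not) and the use of a `site:` restriction as a stand-in for limiting results to one country's legal content are illustrative assumptions, not WorldLII's actual code:

```python
# Hypothetical sketch of Sino-to-Google query translation. The connector
# names and the site: restriction are assumptions for illustration only.

def sino_to_google(query: str, country_domain: str) -> str:
    """Translate a simple Sino-style boolean query into a Google query
    restricted to one country's domain."""
    out = []
    for tok in query.split():
        low = tok.lower()
        if low == "and":
            continue              # adjacent terms are already ANDed by Google
        elif low == "or":
            out.append("OR")      # Google's OR operator must be upper case
        elif low == "not":
            out.append("-")       # marker: negate the next term
        elif out and out[-1] == "-":
            out[-1] = "-" + tok   # attach the '-' prefix to the negated term
        else:
            out.append(tok)
    return " ".join(out) + f" site:{country_domain}"

print(sino_to_google("negligence and duty or liability", "gov.au"))
```

A real implementation would also handle phrase and proximity operators and restrict results to legal content rather than a single domain.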
WorldLII is not yet a global legal information service. It provides a primarily English-language interface, and its databases are primarily in English, with some content in other languages. The collaborating LIIs that provide its databases are drawn mainly from the Pacific, Asia, Australasia, Africa, the USA and South America: apart from the UK and Ireland, its European coverage is as yet slight. LawCite, a free-access global citator for cases and other legal materials, is the most recent development related to WorldLII. It is based largely on collaboration between the same group of LIIs, using citator software developed by AustLII which uses heuristics to recognise references to over 15,000 law report and journal series. It was released for public access in December 2008, and now provides citation records for almost three million cases and some journal articles. The records are updated daily. The Global Legal Information Network (GLIN), operated by the US Library of Congress since at least 2001, is a database primarily of official texts of legislation, but it also includes treaties and, for some countries, judicial decisions and other complementary legal sources. These are contributed by governmental agencies and international organisations, which provide the full texts of their published documents to the database in their original languages. GLIN's member countries are predominantly from Latin America but include quite a few others (e.g. Romania, South Korea and Spain). Each document is accompanied by a summary in English and, in many cases, in additional languages, plus subject terms selected from the multilingual index to GLIN, prepared by Library of Congress staff. Over 150,000 items have been contributed; all summaries are available to the public, and public access to full texts is also available for 25 of the 40 jurisdictions covered by GLIN. 
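The heuristic recognition of citations that LawCite performs might look, in miniature, like the following sketch. The citation pattern and the tiny series list are illustrative assumptions; AustLII's actual heuristics, covering over 15,000 report and journal series, are far more elaborate:

```python
import re

# Illustrative sketch of heuristic citation recognition in the style described
# for LawCite. The series abbreviations and pattern are assumptions, not
# AustLII's actual code.

KNOWN_SERIES = {"CLR", "AC", "ER", "HCA", "UKHL"}  # tiny stand-in for 15,000+

CITATION_RE = re.compile(
    r"[\[(](?P<year>\d{4})[\])]\s+"       # year in square or round brackets
    r"(?:(?P<volume>\d+)\s+)?"            # optional volume number
    r"(?P<series>[A-Z][A-Za-z]{0,9})\s+"  # report series abbreviation
    r"(?P<page>\d+)"                      # first page or judgment number
)

def find_citations(text: str):
    """Return (year, series, page) for matches whose series is recognised."""
    hits = []
    for m in CITATION_RE.finditer(text):
        if m.group("series") in KNOWN_SERIES:
            hits.append((m.group("year"), m.group("series"), m.group("page")))
    return hits

sample = "See Mabo v Queensland (No 2) (1992) 175 CLR 1 and [1932] AC 562."
print(find_citations(sample))
```

Requiring the series abbreviation to appear in a known list is what keeps a pattern this loose from matching arbitrary bracketed numbers, which is presumably why a large curated series list matters.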
Searching is the only access mechanism, but it allows results to be sorted by relevance, by date or by jurisdiction. The translations of summaries of legislation, in English and other languages, are probably the main value of GLIN, at least to an English-speaking audience. In 2007 the GLIN databases of abstracts were added to WorldLII's search scope (and a facility to browse by country or year was added), and GLIN became a FALM member. This gave WorldLII a South and Central American dimension previously lacking, as well as additional legislative databases from countries in Asia, the Middle East and Europe. A linguistic focus to the creation of a multi-country LII was taken in 2003 by LexUM's development of Droit Francophone, the French-language legal portal of the Organisation Internationale de la Francophonie (OIF). It is 'multi-LII' because it includes JuriBurkina content. Its databases of over 4,000 texts include legislation from 21 countries from across the whole Francophonie, and case law from 10 countries. A Web-based interface allows for the remote, decentralised management of its content by representatives from each of the national structures in charge of access to law, who meet annually sponsored by OIF. It is now being reorganised by OIF. Droit Francophone also provides a catalogue of more than 4,000 legal websites concerning the law of the Francophonie, which are evaluated and commented on, and a Web search engine indexing those websites. In 2005 AustLII developed the Commonwealth Legal Information Institute (CommonLII), covering Commonwealth and common law countries. It was in some respects an English-language response to LexUM ('droit Anglophone' is its nickname). CommonLII relies principally upon the content of existing LIIs (AustLII, BAILII, CyLaw, CanLII, PacLII, HKLII, NZLII, SAFLII and ZamLII), but also added over 50 databases from 20 additional countries which do not yet have their own LIIs (mainly in South Asia and the Caribbean). 
The South Asian databases provide nearly 200,000 cases, and part of CommonLII's purpose is to encourage new LII development in these countries and regions. A major addition in 2008 was the 125,000 cases from the English Reports, the basis of the common law world-wide. CommonLII is supported by a range of Commonwealth institutions, including the Commonwealth Law Ministers Meeting, the Commonwealth Secretariat Legal and Constitutional Division and the Institute of Advanced Legal Studies. Financial support for CommonLII has been primarily from Australian sources to date, but the Commonwealth Secretariat is now funding a Commonwealth-wide Criminal Law Library on CommonLII, using virtual database techniques. The Asian Legal Information Institute (AsianLII), developed by AustLII in 2006, drew on CommonLII's content (for 8 Asian Commonwealth countries), PacLII (for Papua New Guinea) and HKLII (for Hong Kong), and is therefore a multi-LII network. However, most of its content comprises databases from 18 additional Asian countries which do not yet have local LIIs. AsianLII provides over 200 databases from 27 of these 28 Asian countries (Afghanistan to Japan; Mongolia to Timor-Leste), Myanmar excepted. It also includes databases from regional organisations such as APEC, the Asian Development Bank (ADB) and the International Development Law Organisation (IDLO). A principal aim of AsianLII, and the reason it has AusAID funding in relation to ten developing countries, is to assist the development of new local LIIs, some of which are likely to emerge from AsianLII's 'Country Supporting Institutions' in these countries. AsianLII is supported by many of the regional law organisations (including LAWASIA, the Inter-Pacific Bar Association, APEC, ADB and IDLO), with funding from Australian sources including AusAID. The development of CommonLII and AsianLII also significantly expanded the content searchable via WorldLII. 
Cooperation between the thirteen LIIs and FALM members that collaborate in the provision of WorldLII has resulted in their joint provision of nearly 900 databases from over 100 countries, searchable from one location. WorldLII is best seen as the largest portal to this collaborative network, but it is only one of a number of such portals: regional, linguistic/political, translation-based, and potentially others from different perspectives. The number of databases provided by all of the LIIs of the Free Access to Law Movement has been growing rapidly since 2002. While the databases from many countries are quite small, those from others are very substantial. In Canada, Australia, Hong Kong, India, Papua New Guinea, the Philippines, Indonesia, South Africa, Ireland, the UK, New Zealand and many Pacific Island countries, what the LIIs offer is very substantial and includes content not available from commercial legal publishers. Furthermore, WorldLII, as the global portal of the LIIs, compares well with its two commercial counterparts (the international portals of LexisNexis and Westlaw) in terms of the scope of countries covered, though not necessarily in depth for individual countries. The LII networks provided through WorldLII, CommonLII and AsianLII utilise a replication/synchronisation model. A copy of all LII data is held in Sydney by AustLII, replicated daily using RSYNC. Searches over the locally stored concordances at AustLII produce rapid search results, and users are then returned to the databases on the originating LII when they choose to access a particular search result. The PacLII mirror at AustLII is the one seen by users outside the Pacific, due to slow access speeds to the Vanuatu server. Some LII content is also mirrored at other LIIs in the network.
An issue currently under discussion is that CanLII prefers a federated search model (with searches sent to cooperating systems) rather than a replication/synchronisation model, but AustLII considers that federated search cannot be operated with fast enough access speeds or useful relevance ranking.

6 Beyond the LIIs: How global is the Free Access to Law Movement?

The membership of the Free Access to Law Movement has to date been drawn primarily from LIIs based in academic institutions. However, recent members have included GLIN (US Library of Congress), SAFLII (now based at the South African Constitutional Court, and operated by its Trust), the Kenya Law Reports (a semi-governmental body) and the Thai Law Reform Commission. The key condition for government-based members in the Declaration (as amended in 2007) is that they 'Do not impede others from obtaining public legal information from its sources and publishing it'. In other words, a government body cannot be a member if it provides free access to law in a way that monopolises the publication of that information or supports such monopoly publication. The key test is whether republication of government information is allowed. Freedom to republish official sources is at the heart of the Free Access to Law Movement, and essential for the operation of LIIs. Examples of multisource, free access, government-provided, national legal information systems include Legifrance (France), FINLEX (Finland), the Jersey Legal Information Board (Jersey), InfoLeg (Argentina), the Albanian Official Publications Centre (Albania) and BelgiumLex (the Belgian government portal). Perhaps the most outstanding example, EUR-Lex, comes from a regional organisation, the European Union. The few examples in Asia include LawNet Sri Lanka and Mongolia's Legal Unified Information System. None of them are yet members of the FALM, nor have they yet been invited to join.
It is not certain that all could do so, as their positions on the question of not impeding republication of government information may vary, and some may also have difficulty in becoming members of a non-government organisation. Nevertheless, it is clear that there is far more extensive free access to law than is provided by the current members of the Free Access to Law Movement. As at the end of 2008, the Free Access to Law Movement included only a minority of the organisations who could be its members, and whose involvement could make it more significant both politically and technically. The most obvious field for expansion of membership is among those government providers of free access to law from multiple sources who also meet the republication criteria, as discussed above. Other possible non-government members, not yet invited to join, may come from university-based free access providers of primary materials (for example, AltLaw, of the Columbia University and University of Colorado law schools); from some repositories of legal scholarship (for example, the bePress Legal Repository); and from developers of new collaborative forms of legal scholarship such as Wikipedia (which has extensive law content) or (if it develops) JurisPedia. FALM membership is slowly expanding, and in 2008 its new members were Juridicas (UNAM, the National Autonomous University of Mexico), the Thai Law Reform Commission, IIjusticia (Argentina), Droit.org (France), the Jersey Legal Information Board, the Ugandan Legal Information Institute (ULII) and the Institute of Law and Technology (Autonomous University of Barcelona, Spain). The geographical scope of FALM membership is nevertheless far more limited than the spread of free access to law as an idea and a reality, being concentrated on the Anglophone and Commonwealth countries, the Francophonie, and parts of Asia.
While Africa is well covered (from both the Anglophone and Francophone directions), Latin America, the Middle East, most of Europe and the states of the USA are not yet involved. This is a challenge for a movement which is potentially global, but it also indicates that the FALM and the development of LIIs may yet be far from reaching their maximum impact. One future direction for the LII networks, and the FALM, is to provide a global alternative to the expanding global reach of the current legal publishing duopoly. In helping to provide and sustain better access to law in many countries, the FALM can encourage organisations in those countries to join in a global project that supports economic progress, the rule of law and democracy. There are some disagreements between those who advocate free access to law about the best strategy for long-term success. Most FALM members would be likely to reject Jon Bing's argument in favour of state-run legal information services that provide only a limited amount of free access. I have described it as a 'statist model', likely to fail because it is based around monopolies over legal information. Tom Bruce of the LII (Cornell) has also been pessimistic about the long-term role of LIIs in providing free access to law, arguing for a radically decentralised model in which courts and legislatures will publish everything themselves, for free, and according to standards. This argument fails to show that third-party republication is doomed, or unnecessary, only that publication at source is good. The future of LIIs may not be certain, but it has not been disproved either. Another difference of opinion, although not as well articulated, is over the value to the diffusion of free access to law of creating regional or linguistic multi-country LIIs, where there may not be direct local participation from all of the countries covered, or at least not initially.
Must all initiatives be 'bottom up' to be valuable, or can 'top down' initiatives sometimes result in engaging local participation, with the eventual result of decentralisation and new LIIs? Or might this stultify local initiatives? AustLII's approach, particularly with AsianLII, has been an explicitly 'top down' approach (it included databases from 27 of 28 countries and territories from inception), but with an equally explicit goal of engaging 'bottom up' local LII development. Both approaches are agreed on the value of maximum decentralisation to local LIIs: it is a question of how many ways you can get there. Different preferences in models of LII networking, between a replication/synchronisation model and a federated search model, have previously been discussed. The main constraining factor on the non-government LIIs is funding: free to use, but not free to build. Every LII looks after the funding of its own system. The models on which LIIs are funded vary a great deal. AustLII has a 'multi-contributor' model, with nearly 200 institutional contributors, plus individual contributors (mainly lawyers). BAILII is similar in having multiple contributors, though fewer. The LII (Cornell) annually solicits funds from the public. Most LIIs have had a considerable amount of academic funding and academic institutional support (including HKLII, PacLII, AustLII, LawPhil and BAILII). CanLII is funded primarily by the Canadian legal profession: every Canadian lawyer provides over C$20 per year via their professional associations. Other LIIs have not been able to replicate this. International aid and development agencies have made significant contributions to the development costs of PacLII, SAFLII, Droit Francophone, AsianLII and WorldLII, and strategic alliances with some legal publishers have helped AustLII. A small LII like CyLaw is a personal project.
NZLII still lives on 'the smell of an oily rag' (a NZ expression) and help from other LIIs while it searches for longer-term funds, as does CommonLII. Kenya Law Reports is trying to move from a model combining government funding with subscription income to one which does without subscriptions for its online resources. There is no single source likely to fund global free access to law in the long term, but that does not mean it cannot be done: it has been done, with ever-widening scope, for over a decade. There is no one formula but, as with many other aspects of open content, there are many non-business models by which numerous stakeholders can be engaged. There are as yet few government-based FALM members, but government-based 'LIIs' face different funding challenges; GLIN is unusual in having obtained sustained government funding. While there are many individual courts and legislatures which publish their own output for free access (often from their own budgets), there are relatively few governments which fund multisource free access national legal information systems (the usage of 'LII' in this article), and they are mainly in Europe, with some in Latin America (examples are given above). In many developing countries, there are no funds available for the development of online legislation or case law unless they are provided by international aid agencies such as the World Bank, the Asian Development Bank, CIDA or AusAID. In recent years the World Bank has funded major free access systems in Sri Lanka and Mongolia (mentioned above). The sustainability of these free access facilities, particularly in terms of updating data, often becomes problematic once the initial aid funding ceases. Where this happens, engagement with the FALM members, and the assistance they can provide, may be valuable.
In the past, aid and development agencies have often invested considerable funds in national legal information systems without requiring that free access systems be developed, and sometimes requiring to the contrary that they adopt 'pay for use' models in the hope that the systems will become self-funding. The FALM and its members need to help convince aid and development agencies that free access models can be more sustainable, and socially beneficial, in developing countries than closed 'pay for use' models.

Notes

Professor of Law, Faculty of Law, University of New South Wales and Co-Director, Australasian Legal Information Institute (AustLII), email graham@austlii.edu.au. Some parts of this chapter were previously published on the GlobaLex website. Helpful comments have been received from Andrew Mowbray, Philip Chung, Pierre-Paul Lemyre, Joe Ury, Kerry Anderson, Kevin Pun, Abdul Paliwala, Martin Backes and Jill Matthews (who also assisted with editing), but responsibility for content remains with the author.

For a summation of these ideals, see D. Poulin, 'Open access to law in developing countries', First Monday, vol. 9, no. 12, 6 December 2004. An early statement is G. Greenleaf, A. Mowbray, G. King and P. van Dijk, 'Public access to law via internet: the Australasian Legal Information Institute', Journal of Law & Information Science, 1995, vol. 6, issue 1.

On Wex, see http://www.law.cornell.edu/wex/index.php/Main_Page (visited 15 April 2009); Wex is developed in part from the LII (Cornell)'s previous 'Law About …' series.

G. Greenleaf, A. Mowbray and P. van Dijk, 'Representing and using legal knowledge in integrated decision support systems - DataLex WorkStations', Artificial Intelligence and Law, 1995, vol. 3, nos 1-2, pp 97-124; and see AustLII Publications for links to over 50 publications since 1992, including DataLex Project publications.

See http://www.lexum.umontreal.ca/publication.epl?lang=en for extensive LexUM publications.

D. Poulin, 'CanLII - How the Bar and Academia can make free access to the Law a reality', Proceedings of the 3rd Law via the Internet Conference, University of Technology, Sydney, Australia, 2001; D. Poulin, B. Salvas and F. Pelletier, 'La diffusion du droit canadien sur Internet', 102 R. du N. 189, 2000.

As at 15 April 2009 the Droit Francophone site was not accessible; a notice on the site (translated from French) stated: 'The International Organization of la Francophonie is doing maintenance work on its portal Droit Francophone. For this reason, the site will be unavailable during this period of work, but our teams are working for a rapid return to normal operation. We thank you in advance for your understanding and your loyalty.'

G. Greenleaf, P. Chung and A. Mowbray, 'Challenges in improving access to Asian laws: the Asian Legal Information Institute (AsianLII)', UNSWLRS 42 (on bepress), in Proceedings of the 4th Asian Law Institute Conference, Jakarta, May 2007, http://law.bepress.com/unswwps/flrps/art42/ (visited 15 April 2009); G. Greenleaf, 'Free access to Japanese and Asian law - The launch of AsianLII in Japan', UNSWLRS 60 (on bepress), presentation at the launch of the Asian Legal Information Institute in Japan, Meiji University, Kanda, Tokyo, 4 August 2007, http://law.bepress.com/unswwps/flrps/art60/ (visited 15 April 2009).

The thirteen LIIs and FALM members collaborating in the provision of WorldLII are AustLII, BAILII, CyLaw, CanLII, GLIN, LawPhil, LII (Cornell), PacLII, HKLII, NZLII, the Thai Law Reform Commission, SAFLII and ZamLII.

J. Bing, 'The policies of legal information services: A perspective of three decades', in L. Bygrave (ed), Yulex 2003 (Oslo: Institutt for rettsinformatikk / Norwegian Research Centre for Computers and Law, 2003), pp 37-55.
Home Comfort Heating & Air Conditioning provides HVAC services in Los Angeles County and the surrounding area; it asserted rights in marks “wholly or partially comprised of the word elements ‘HOME COMFORT,’ ” including “HOME COMFORT SERVICES” and “HOME COMFORT HEATING AND AIR CONDITIONING.” It had some registrations. Ken Starr Inc. (KSI) subsequently began operating an HVAC business under the name “Home Comfort USA” in Southern California, including Los Angeles County and the surrounding area. Notable findings: The court found strong evidence of confusion from negative online reviews, including one negative Yelp review, and from oral complaints regarding products and services that were supplied by KSI. In the prior year, Home Comfort received five attempts by customers seeking to return products purchased from KSI, two refund requests arising out of KSI’s services, thirty inquiries about KSI’s special pricing offers, and forty inquiries about available services from customers who saw KSI’s ads. A few of the voicemails were generic requests for quotes, but the majority referenced specific accounts, appointments, and issues with prior work done by KSI. The district court applied the 9th Circuit’s screwed-up “can you determine the product just from knowing the mark?” test to determine that the marks were suggestive.
Twenty-six HVAC businesses around the nation that use some combination of the terms “home” and/or “comfort” in connection with their services didn’t diminish the strength of the mark, especially without more evidence of use and given that “HVAC customers will, necessarily, seek a local source for these products and services.” Of the three businesses that did serve California, each used other distinguishing words as well: “Stephan’s Home Comfort Services,” “Engineered Comfort,” and “US Comfort.” Nor did Home Comfort’s addition of “Heating & Air Conditioning” to its marks, KSI’s slogan “Call the Comfort Guys, We’re There!” or the parties’ different colors diminish the likely confusion. The word marks “Home Comfort” and “Home Comfort USA” were essentially indistinguishable, and adding generic words didn’t create a meaningful distinction “from the perspective of a consumer.” The stylized marks included “Home Comfort” as their dominant portion, each had a depiction of a house, and they appeared substantially similar; the colors and the slogan weren’t how consumers would make a primary identification. Purchaser care was neutral: almost everyone needs HVAC services, making the target market average, but the services tend to be expensive, increasing consumer care. Given that the factors favored a finding of likely confusion, was there irreparable harm? The rule is that “[e]vidence of loss of control over business reputation and damage to goodwill could constitute irreparable harm,” and the court found that the actual confusion shown here satisfied that standard. However, delay works against a finding of irreparable harm, and Home Comfort delayed 20 months after discovery of the problem before seeking relief. The court found that the delay was sufficiently explained by Home Comfort’s oppositions to KSI’s trademark registration applications and extensive settlement discussions.
In addition, the dates of the voicemails indicated that confusion was increasing over time, making the delay less probative. With that, the other requirements for injunctive relief were easily satisfied. Painter alleged that Blue Diamond mislabeled its almond beverages as “almond milk” when they should be labeled “imitation milk” because they substitute for and resemble dairy milk but are nutritionally inferior to it. The court of appeals affirmed the district court’s finding of FDCA preemption. “The FDCA sets forth the bare requirement that foods imitating other foods bear a label with ‘the word “imitation” and, immediately thereafter, the name of the food imitated.’” Painter’s argument that Blue Diamond needed either to provide a nutritional comparison of almond milk to dairy milk or to cease using the term “milk” on the label of its almond milk products thus conflicted with the FDCA. Sunny Delight has never sold a product labeled as the ones [depicted in] the First Amended Complaint are labeled. Sunny Delight product labels have always included more information about the products than shown in [the FAC]. The images in [the FAC] that Plaintiffs claim are the true labels they read and relied on are actually just images from an old version of Sunny Delight’s website. Those were stylized images used on the website only. They lack numerous details on the actual labels because people looking at the website have trouble reading all the things that are on the actual labels given the size of the images. Sunny Delight has never sold any products with the labels reflected in [the FAC]. The court found that plaintiffs and their counsel had knowingly made false factual contentions in the first amended complaint, including the allegation that the embedded image was a “true and accurate representation” of the labels, as well as numerous allegations that likewise incorrectly described those labels in ways that are central to the claims in the litigation.
“These falsehoods were not the product of reasonable mistake and were not mere inaccuracies that would ‘likely have [had] evidentiary support after a reasonable opportunity for further investigation or discovery.’” Plaintiffs’ counsel acknowledged at the hearing that they based the complaint on Sunny Delight’s website, not on the labels on the actual products. “That approach might have been acceptable had Plaintiffs purchased the Products based on a website image, but they did not…. It is apparent that Plaintiffs’ counsel did not undertake the most fundamental of investigations— namely, examining the actual Product labels—before filing the First Amended Complaint.” Counsel had the opportunity to acknowledge the problem when Sunny Delight identified it, but they compounded it instead, arguing that the court should take the allegations as true. This wasted the court’s and Sunny Delight’s time and resources, and was sanctionable under Rule 11. The court struck the first amended complaint rather than parsing the allegations for any that were salvageable. However, the court wouldn’t strike the complaint with prejudice. Plaintiffs argued that at least some of their claims still had merit, and the court didn’t rule on whether the presence of a front disclaimer would preclude the claims as a matter of law. The court was also wary of confusing attorney honesty with the merits; sanctions address only the former. Nonetheless, to serve Rule 11’s deterrent purposes, the court also awarded reasonable attorneys’ fees for preparing this motion and the second motion to dismiss, where the misrepresented labels formed the core of the dispute and were the primary basis for the court’s ruling. Fees for general casework, or for the first motion to dismiss—which was primarily about jurisdiction—were not included. The breach of express warranty claims were reinstated for the same reasons.
PS: Since In re GNC purported to interpret California law, can we now defer to the 9th Circuit to say that the case isn't even right in the 4th Circuit? I know, it would be better for a California state court to point this out--I can hope, though. The question presented in Rimini Street v. Oracle is whether the Copyright Act's allowance of "full costs" is limited to the categories and amounts of costs enumerated in 28 U.S.C. 1920 & 1821, or whether it refers to all litigation expenses. Because Congress and the Supreme Court have stated that the word "costs" is a term of art, the question turns on whether the word "full" -- as the Ninth Circuit held -- can cause "costs" to lose its technical meaning. This brief, filed on behalf of eleven corpus linguistics scholars, presents empirical evidence derived from corpora -- electronically searchable databases of texts -- that shows that it cannot. The meaning of adjectives is determined by the nouns they modify, not the other way around. That is why we judge a "tall seven year old" by a different standard of tallness than a "tall NBA player" and why the word "long" means one thing when modifying "story" and something else entirely when modifying "table." Furthermore, the linguistic evidence shows that "full" in Section 505 should be considered a "delexicalized" adjective -- meaning its purpose is to draw attention to and underline an attribute that is already fundamental to and embedded in the nature of the noun. "Full" often serves to emphasize the completeness of an object that is already presumed to be complete, like "full deck of cards," "full set of teeth," and "full costs." The court dismissed the claim; the challenged practices/statements were nonactionable opinion and puffery. Quoting McCarthy: “Under both the Lanham Act and the Constitutional free speech clause, statements of opinion about commercial matters cannot constitute false advertising ....” Also true of GBL §349. 
an informational directory of attorneys, which consumers can consult whether or not they intend to hire an attorney. And the complained-of website features simply provide information; they might be considered in making, but do not themselves propose, a commercial transaction. Moreover, that sponsored advertisements appear on the defendant’s website does not morph the website’s noncommercial features into commercial speech. So that put the profiles outside of the Lanham Act anyway. The defendant’s rating system is inherently subjective. The defendant chooses the inputs for its system and decides how to weigh them. … A reasonable consumer would view an Avvo rating as just that – the defendant’s evaluation. What factors the defendant believes to be important in assessing attorneys, and the result of the defendant’s weighing of those factors, cannot be proven false. Third, the “Pro” badge appearing on the profile pictures of attorneys who pay Avvo is intended to convey a statement of fact: that an attorney has verified the attorney’s information as it appears on Avvo. Avvo’s website explains this meaning with an “i” icon next to the “Pro” badge. Hovering over the “i” discloses that “Attorneys that are labeled PRO have verified their information as it appears on Avvo,” and the website eventually explains the “Avvo Pro” subscription plan if you follow enough links elsewhere. Thus, the statement wasn’t false. Davis alleged that it was still misleading because it implied higher quality, and the disclosures weren’t sufficiently conspicuous to avoid that implication. The court agreed with Avvo that this was puffery. “Pro” means, literally, a professional; that was true [though why that’s relevant to misleadingness, especially when others in the profession were not granted the dignity of that characterization if they didn’t pay, is unclear].
To the extent that consumers perceived it as “conveying that an attorney is especially experienced or skilled, the term is mere puffery.” Davis couldn’t prove that lawyers marked “Pro” were undeserving, “because in context the term has no definite meaning or defining factors.” Allegations about advertising “highly qualified,” “the right,” or the “best” attorneys failed for the same reasons, as did allegations that paying lawyers got enhanced visibility on the website. Finally, Davis did not sufficiently allege injury by offering facts that demonstrate a causal connection between his injury and some misrepresentation made by Avvo. Conclusorily alleging lost fees, reputational damages, and diverted business was insufficient absent facts indicating that consumers relied on the allegedly misleading Avvo ratings, pro badges, client reviews, or other statements “in choosing or gauging the reputation of an attorney.” “The only fact the plaintiff pleaded to support his theory of harm is that the defendant’s website holds a prominent presence on the internet, and thus consumers who perform a Google search with phrases like ‘top litigation attorney’ will see the website on the first page of results.” That wasn’t enough. H/T C.E. Petit. This comedy of errors might (might!) be ending. The court of appeals affirmed the district court’s judgment in favor of Hargis and its denial of Hargis’s motion for fees and costs. In B&B’s trademark infringement action against Hargis in May 2000, a jury found that B&B’s “Sealtight” mark was not entitled to protection because it lacked secondary meaning. We affirmed. In June 2006, B&B filed for incontestability status for its trademark with the Patent and Trademark Office (PTO). The PTO issued a Notice of Acknowledgment in September 2006, concluding that B&B’s affidavit of incontestability met the statutory requirements.
The jury found that Hargis infringed on B&B’s trademark but did not do so willfully, awarded B&B none of Hargis’s profits, and found for Hargis on its counterclaims and its affirmative defense of fraud. Based on the jury’s fraud finding, the district court found that “Sealtight” was not entitled to incontestability status, and that B&B therefore had not pled an intervening change in circumstances allowing it to relitigate claims raised in the 2000 jury trial. The district court therefore entered judgment for Hargis on all claims. B&B appealed, arguing that the jury verdict finding fraud and a lack of willfulness was clearly erroneous, and that the district court abused its discretion in refusing to disgorge Hargis of its profits. The court of appeals found no plain error. Incontestability requires an applicant to file an affidavit with the PTO declaring that “there has been no final decision adverse to [his] claim of ownership of such mark . . . or to [his] right to register the same or to keep the same on the register . . . .” “At least one circuit treats a district court’s finding of mere descriptiveness at summary judgment as such an adverse decision.” [And I don’t see how one could conclude otherwise, since descriptiveness means that the symbol is not a mark and thus can’t be owned as a mark.] Failure to disclose is important since the PTO doesn’t examine §15 affidavits on the merits as long as they are facially complete. And the affidavit is “especially important because a defendant accused of infringing an incontestable trademark may raise an affirmative defense that ‘the registration or the incontestable right to use the mark was obtained fraudulently.’” Fraud on the PTO “consists of willfully withholding material information that, if disclosed, would result in an unfavorable outcome.” Here, materiality means information that a reasonable examiner would have considered important. Warning: bad argument alert, not fully called out by the court of appeals.
B&B argued that the 2000 verdict wasn’t a final adverse decision. The court of appeals responded that, in 2007, the TTAB explicitly stated that the 2000 jury verdict was an adverse decision that extinguished B&B’s common-law rights in the “Sealtight” name, so there was no plain error in the district court so finding. B&B then argued that its deception wasn’t willful because it didn’t realize the jury verdict was a final adverse decision and that it didn’t disclose that verdict based on the advice of counsel. The jury was entitled to disbelieve B&B’s owner’s testimony on this point. 15 U.S.C. § 1065 specifies that the affidavit has to include statements that “(1) there has been no final decision adverse to the owner’s claim of ownership of such mark for such goods or services, or to the owner’s right to register the same or to keep the same on the register; and (2) there is no proceeding involving said rights pending in the United States Patent and Trademark Office or in a court and not finally disposed of.” B&B’s predicate to its defense, that “finality” was what mattered, is thus fatally flawed. Of course, it is in theory possible that its counsel was so incompetent as not to understand this very clear provision of law, especially since courts of appeal apparently feel no need to mention it, but the true requirements for an incontestable registration might lend even more plausibility to the jury’s conclusion. Hargis also wanted its fees, and I sympathize (we haven’t even talked about the other facts B&B played fast & loose with, no pun intended), given that it’s been fighting this ridiculous case for decades. Despite the fraud finding, the court concluded that “[t]his case does not present an example of groundless, unreasonable, or vexatious litigation, as it has arguable merit on both sides—evidenced by the fact that both parties have prevailed at various times throughout its 12-year history. 
We cannot say that B&B pursued litigation in bad faith, as it received a favorable Supreme Court ruling and reasonably believed it could prevail.” This conclusion demonstrates the importance of selecting a starting point. I would have started instead with B&B’s decisions to go to the PTO seeking a workaround to the failure of the first case, to fail to disclose that material adverse result to the PTO, and to deliberately leverage that wrongly granted incontestability as the sole reason to relitigate the whole case. I would have thought that taking a matter to the Supreme Court on a premise that itself was based in fraud was “exceptional.” It’s probably also true that Hargis could have disposed of the matter earlier had its attorneys been unusually attentive to the actual requirements of incontestability and had the district court also understood incontestability, but as between the parties I would attribute the responsibility to B&B. I don’t know why this took so long to show up in my searches, but: this is a consumer protection class action arising from Sturm’s ill-fated decision to put instant (which it labeled “soluble”) coffee into pods that fit into Keurig coffeemakers, to get a jump on the competition for nicer ground coffee pods once the pod patent expired. This lawsuit was filed in 2011; the district court dismissed it on the theory that consumers should have known that “soluble” meant “instant,” and the court of appeals reinstated it, after which a class was certified on liability. Sturm didn’t take my unasked-for advice from that last post; instead, it seems determined to litigate to the bitter end, no pun intended. Plaintiffs brought claims under the consumer protection laws of Alabama, California, Illinois, New Jersey, New York, North Carolina, South Carolina, and Tennessee. This opinion details the court’s trial plan dealing with key elements of the claims. Sturm waited seven years to raise an FDCA preemption argument and did so in a few sentences; nope.
The court also betrays a bit of impatience with Sturm’s re-raising of previously rejected arguments. More generally, case law about class certification establishes whether certain issues can be resolved with class-wide evidence at trial. And the fact that the Seventh Circuit said that “[e]very consumer fraud case involves individual elements of reliance or causation” in its earlier opinion in this case does not mean that class-wide proof is impermissible to establish reliance or causation under any and all circumstances. The parties agreed that individual proof was needed to show causation under the statutes of Tennessee and South Carolina, but disagreed about the other states’ laws. California’s CLRA: “When the consumer shows the complained-of misrepresentation would have been material to any reasonable person, he or she has carried the burden of showing actual reliance and causation of injury for each member of the class. As some courts have put it, the plaintiff may establish causation as to each by showing materiality as to all.” Unless “the record will not permit” that inference, as when a named plaintiff testifies that she didn’t have the posited reaction to the claim or where it was “likely that many class members were never exposed to the allegedly misleading advertisement.” Thus, inferences of causation, reliance, and injury arise under the CLRA “where plaintiffs can establish that the defendants made a uniform and material misrepresentation or omission to the entire class.” The UCL “is much more straightforward” and doesn’t require individualized proof of deception, reliance and injury.
New Jersey has applied a presumption of causation where a misrepresentation was material, in writing, and uniformly made to each plaintiff, and also where “all the representations about the product [were] baseless.” The application of a presumption of causation also may depend on whether plaintiffs could have known the truth behind the alleged fraud (why that is relevant is not clear to me) and whether plaintiffs reacted to information about the product in a similar manner. North Carolina requires a showing of proximate causation, which itself requires a demonstration of actual and reasonable reliance. “While this inquiry may be difficult to conduct on a class-wide basis, the Supreme Court of North Carolina has held that circumstantial evidence may be sufficient for a factfinder to infer reliance.” For example, a material misrepresentation that went to the sole point of the product could justify a class-wide finding of causation, as could a sufficiently material misrepresentation uniformly made to the class. The court concluded that the target consumers, Keurig owners, faced a “more-or-less one-dimensional decision making process” when they purchased the accused product. They hoped to buy single-servings of premium, ground coffee they could brew in their Keurig machines. “There is no other logical explanation as to why consumers would purchase instant coffee, at a premium price, in a K-Cup, that they had to brew.” It doesn’t make sense to buy a product three or four times more expensive than typical instant coffee to use a specialized machine to heat water for instant coffee. “This is simply not a case where the plaintiffs had a number of reasons for purchasing” the product. Ultimately, though a jury would determine deceptiveness and proximate causation of injury, it could do so on a class-wide basis, without individual inquiries. The court thus indicated its intent to subclass based on the initial and modified packages, further divided by state law. 
The first planned jury trial would be bifurcated. The first part would assess whether defendants committed a deceptive act or omission that would be materially misleading to a reasonable consumer. If the jury so found, the court would then decide whether those acts were materially deceptive under North Carolina law (which treats whether acts are unfair or deceptive as a question of law) and California law (because UCL and FAL claims are equitable in nature). The jury would then answer the same question for the remaining state subclasses. If the jury said yes, it would proceed to answer questions about whether the conduct occurred in the course of trade or commerce [that one seems a gimme]; whether the conduct affected the public interest; whether the non-Tennessee/South Carolina/California classes suffered injuries/damages/ascertainable losses in reliance on, or as a proximate cause of, the deception; and whether defendants intended for that last group to rely on their deceptive acts or omissions. A second jury trial would then determine the remaining elements of ascertainable loss, proximate cause, and damages for the Tennessee and South Carolina subclasses.

PURPOSE: The Project on the Foundations of Private Law is an interdisciplinary research program at Harvard Law School dedicated to scholarly research in private law. Applicants should be aspiring academics with a primary interest in intellectual property (especially, patent, copyright, trademark and trade secret) and its connection to one or more of property, contracts, torts, commercial law, unjust enrichment, restitution, equity, and remedies. The Project welcomes applicants with a serious interest in legal structures and institutions, and welcomes a variety of perspectives, including economics, history, philosophy, and comparative law.
The Qualcomm Postdoctoral Fellowship in Private Law and Intellectual Property is specifically designed to identify, cultivate, and promote promising IP scholars early in their careers. Fellows are selected from among recent graduates, young academics, and mid-career practitioners who are committed to spending one or two years at the Project pursuing publishable research that is likely to make a significant contribution to IP and private law, broadly conceived. More information on the Center can be found at: http://www.law.harvard.edu/programs/about/privatelaw/index.html. PROGRAM: The Qualcomm Postdoctoral Fellowship in Private Law and Intellectual Property is a full-time, one- or two-year residential appointment, starting in the Fall of 2019. Like other postdoctoral fellows, IP Fellows devote their full time to scholarly activities in furtherance of their individual research agendas in intellectual property and private law. The Project does not impose teaching obligations on fellows, although fellows may teach a seminar on the subject of their research in the Spring of their second year. In addition to pursuing their research and writing, fellows are expected to attend and participate in research workshops on private law, and other events designated by the Project. Fellows are also expected to help plan and execute a small number of events during their fellowship, and to present their research in at least one of a variety of forums, including academic seminars, speaker panels, or conferences. Through organizing events with outside speakers, helping to run programs, and attending seminars, fellows interact with a broad range of leading scholars in intellectual property and private law. The Project also relies on fellows to provide opportunities for interested students to consult with them about their areas of research, and to directly mentor its Student Fellows.
Finally, fellows will be expected to blog periodically (about twice per month) on our collaborative blog, New Private Law (blogs.harvard.edu/nplblog). STIPEND AND BENEFITS: Fellows have access to a wide range of resources offered by Harvard University. The Center provides each fellow with office space, library access, and a standard package of benefits for employee postdoctoral fellows at the Law School. The annual stipend will be $55,000 per year. ELIGIBILITY: By the start of the fellowship term, applicants must hold a J.D. or other graduate law degree. The Center particularly encourages applications from those who intend to pursue careers as tenure-track law professors in intellectual property and private law, but will consider any applicant who demonstrates an interest and ability to produce outstanding scholarship in the area. Applicants will be evaluated by the quality and probable significance of their research proposals, and by their record of academic and professional achievement. 2. PDFs of transcripts from all post-secondary schools attended. 3. A Research Proposal of no more than 2,000 words describing the applicant’s area of research and writing plans. Research proposals should demonstrate that the applicant has an interesting and original idea about a research topic that is sufficiently promising to develop further. 4. A writing sample that demonstrates the applicant’s writing and analytical abilities and ability to generate interesting, original ideas. This can be a draft rather than a publication. Applicants who already have publications may also submit PDF copies of up to two additional published writings. 5. Three letters of recommendation, emailed directly from the recommender. Letter writers should be asked to comment not only on the applicant’s writing and analytical ability, but on his or her ability to generate new ideas and his or her commitment to pursue an intellectual enterprise in intellectual property and private law. 
To the extent feasible, letter writers should provide not just qualitative assessments but also ordinal rankings. For example, rather than just saying a candidate is “great,” it would be useful to have a statement about whether the candidate is (the best, in the top three, among the top 10%, etc.) among some defined set of persons (students they have taught, people they have worked with, etc.). All application materials with the exception of letters of recommendation should be e-mailed by the applicant to conner@law.harvard.edu. Letters of Recommendation should be emailed directly from the recommender to the same address. For questions or additional information, contact: Bradford Conner, Coordinator, conner@law.harvard.edu.

Plaintiff Emson sued defendants Masterpan and S&E for false advertising. The parties compete to sell pots and pans. Emson’s Gotham Steel pots and pans are made of aluminum and have a copper-colored, non-stick ceramic and titanium coating; it uses direct response TV commercials and an “As Seen On TV” logo on its packages and other ads. S&E and Masterpan sell “The Original Copper Pan,” which allegedly deceives the public by falsely and deceptively conveying to consumers that its cookware is the first of its kind and that Emson’s (and others’) products are not the originals but are instead mere imitations. In addition, defendants allegedly falsely advertised certain versions of the OCP as being made of, and not merely coated with, copper. “Although each pan has a copper-colored cooking surface, Emson alleges that it ran tests on samples of the 12-inch OCP,” and found that “the cores of each of the tested Original Copper Pans had undetectable levels of copper” and that the inner coating on the samples also lacked the presence of copper. Finally, defendants allegedly “use an ‘As Seen On TV’ logo in their advertising,” without having advertised on TV, or only minimally doing so. Defendant S&E, however, fared better.
Emson alleged sufficient facts to plausibly conclude that Masterpan markets and sells the OCP, as noted above and by providing documentary evidence that Masterpan shares directors with Dreambiz, Ltd., which owns the trademark “The Original Copper Pan.” But there was nothing so specific as to S&E, only allegations that it shared an address with Masterpan.

Plaintiffs bought Cheez-It crackers that were labeled “whole grain” or “made with whole grain.” They alleged violation of New York and California consumer protection laws because such labeling would cause a reasonable consumer to believe that the grain in whole grain Cheez-Its was predominantly whole grain, when, in fact, it was primarily enriched white flour. The district court held that the whole grain labels would not mislead a reasonable consumer, and the court of appeals (in some tension with its recent holding on Trader Joe’s truffle-flavored oil) reversed. The challenged packages used “WHOLE GRAIN” in large print in the center of the front panel of the box, and “MADE WITH 5G OF WHOLE GRAIN PER SERVING” in small print on the bottom, or “MADE WITH WHOLE GRAIN” in large print in the center of the box, with “MADE WITH 8G OF WHOLE GRAIN PER SERVING” in small print on the bottom. Both packages also contained a “Nutrition Facts” panel on the side of the box, which stated in much smaller print that a serving size of the snack was 29 grams and that the first ingredient on the ingredients list (in order of predominance, as required by federal law) was “enriched white flour.” “Whole wheat flour” was either the second or third ingredient. False advertising or deceptive business practices under New York or California law requires that the deceptive conduct was “likely to mislead a reasonable consumer acting reasonably under the circumstances.” Context is crucial, including disclaimers and qualifying language.
The district court reasoned that “a reasonable consumer would not be misled by a product’s packaging that states the exact amount of the ingredient in question.” But the packaging here allegedly implied that the product was “predominantly, if not entirely, whole grain,” and it wasn’t. The labels were plausibly misleading because they falsely implied that the grain content was entirely, or at least predominantly, whole grain. The ingredient list didn’t help, even though it indicated that a serving size of Cheez-Its was 29 grams and the list of ingredients names “enriched white flour” as the first (and thus predominant) ingredient. The serving size didn’t “adequately dispel the inference communicated by the front of the package that the grain in ‘whole grain’ crackers is predominantly whole grain because it does not tell what part of the 29-gram total weight is grain of any kind.” Plus, adopting the Ninth Circuit’s Williams rule, the court of appeals agreed that “reasonable consumers should [not] be expected to look beyond misleading representations on the front of the box to discover the truth from the ingredient list in small print on the side of the box.” The Nutrition Facts panel and ingredients list plausibly contradicted, rather than confirmed, the “whole grain” representations on the front of the box. Other cases dismissed on the pleadings involved plaintiffs who alleged deception because a product label misled consumers to believe, falsely, that the product contained a significant quantity of a particular ingredient. Here, however, the deceptiveness was the implication that, of the grain content in the product, most or all of it is whole grain, as opposed to less nutritious white flour.
In addition, in most of the other cases, “plaintiffs alleged they were misled about the quantity of an ingredient that obviously was not the products’ primary ingredient.” No reasonable consumer would think that crackers “made with real vegetables” were made primarily with fresh vegetables. Here, “reasonable consumers are likely to understand that crackers are typically made predominantly of grain. They look to the bold assertions on the packaging to discern what type of grain.” Thus, the front of the package could have misled them. The court declined to adopt a rule that would allow any “made with X” advertising when the ingredient X was in fact present, no matter how deceptive (e.g., if the crackers here were 99.999% white flour).

TN Warranty started as CompassOne Warranty in 2015, with a mark derived from an earlier company with a mark called Vector Compass. TN Warranty also argued that a non-party, Premium 2000+, was run by an ex-business partner turned rival of TN Warranty’s founder, who’s filed various lawsuits against that founder. Premium 2000+ allegedly offered to do business with TrueNorth, but cited TN Warranty’s name and mark as a “road block” to doing business, indicating TrueNorth’s lawsuit was premised more on Premium 2000+’s animosity towards the founder rather than on true confusion in the marketplace. TrueNorth disagreed, citing emails and phone calls from truck drivers and professionals within the trucking and insurance industries that allegedly demonstrated confusion. Preliminary injunction: though the Eighth Circuit has not yet ruled on the Lanham Act consequences of eBay and Winter, those cases lead to the conclusion that a presumption of irreparable harm upon showing likely success on the merits (via confusion) is not warranted. Harm to reputation can, however, be irreparable. TrueNorth argued that TN Warranty has received negative consumer reports from the Better Business Bureau and Trucker’s Report (an online forum used by truck drivers).
Some of TrueNorth’s trucking industry partners contacted TrueNorth on behalf of drivers with warranty claims in an attempt to resolve warranty issues. It explained that its delay in seeking a preliminary injunction stemmed from an increase in calls about warranties that it received in 2018. TrueNorth argued that it started recording calls in January 2018 due to the “increasing number of calls and other instances of confusion among TrueNorth customers.” It recorded six calls in February 2018, four calls in March 2018, one call in April 2018, three calls in May 2018, no calls in June 2018, two calls in late July 2018 and six calls in August 2018. Its witness described the harm as follows: “Just verbal communications that have been relayed to me that they think that the presence of having su[ch] a similar logo is creating challenges and confusion that is disruptive to our working together to market to owner operators and truck lessees.” The witness further described the situation as creating challenges with how TrueNorth tries to market to leasing companies, but could not provide any specific examples and was not aware of any specific loss of business with the company under discussion.

Are there people who believe that Twiqbal improved consistency? Because I do not understand the level of detail required. Here, the magistrate holds that pleading that one’s testing complied with FDA regulations is not sufficient to plausibly plead that one’s testing complied with FDA regulations. I would have thought that, if it’s enough of a fact to be determined by a court and not trigger preemption, then it’s enough of a fact to be pled on its own, even if it is a potentially dispositive issue. But I don’t see non-advertising Twiqbal cases, so I might be overly critical. The plaintiffs sought to represent a class of Banana Boat “SPF 50” or “SPF 50+” product purchasers.
They alleged that “rigorous scientific testing has revealed that the Products do not provide an SPF of 50, much less ‘50+’.” Consumer Reports magazine reported in May 2016 that “its own testing had revealed that Banana Boat Kids SPF 50 sunscreen lotion had an SPF of only 8.” Further, plaintiffs alleged that their own independent testing using FDA methods demonstrated the Products had SPFs lower than listed on the label. They brought various state law false advertising claims. The court rejected defendants’ primary jurisdiction argument. The FDA published a “sunscreen Final Rule” allegedly “mandating a whole host of highly specialized, highly scientific, and precise technical and scientific protocols that manufacturers must follow relating to testing and labeling.” Agency expertise is “the most common reason for applying the doctrine,” which is also used “to promote uniformity and consistency with the particular field of regulation.” Other cases have rejected applying the doctrine to sunscreen labeling, given that plaintiffs allegedly relied on long-established SPF testing procedures and standards, rendering their labels false and misleading, which is a routine factual question for courts. Defendants argued that the court would have to determine whether the parties’ tests followed the technical and scientific requirements of the sunscreen Final Rule. But “this Court is equipped to address such technical and scientific questions, as this and other courts routinely do on a regular basis.” Even if the FDA was in the “best” position to interpret the Final Rule, the court could do so too. In terms of uniformity and consistency, it was merely speculative that the FDA would be taking further action, much less formal action, or that any such action would be retroactive. Though the FDA had solicited bids for testing sunscreens over two years ago, there was no indication that further action was forthcoming. ….
Plaintiffs conducted their own independent testing of the Products, utilizing the methodology for SPF testing mandated by the FDA. Specifically, the independent testing conducted by Plaintiffs was conducted in compliance with all FDA testing methods embodied in FDA Final Rule, 21 CFR Parts 201 and 310, (Federal Register/Vol 76, No 117/Friday, June 17, 2011/Rules and Regulations, including 21 CFR 201.327). The results of the independent testing conducted by Plaintiffs were consistent with the results suggested by Consumer Reports’ test results and confirmed that the Products had actual SPFs substantially lower than the claimed SPF 50 or “50+”. Plaintiffs’ investigation concluded that all three products, clearly labeled as containing SPF 50 or “50+”, contained an SPF of less than 37.8 and no more than 30.1. This wasn’t sufficient (though plaintiffs said they were prepared to file an amended pleading). The complaint was 34 pages long and only 4 paragraphs were devoted to this crucial issue (this comparison strikes me as a bad measurement tool). Only one paragraph mentioned the specific methodology. There was a need for more than a “conclusory statement that the testing complied with the FDA Final Rule, an ultimate question this Court may be called upon to decide in the future.” And it was unclear whether plaintiffs had FDA-compliant test results relating to all three challenged products. Thus, the court found it prudent to allow an amended complaint. The court also commented that plaintiffs would likely have difficulty satisfying the predominance requirements on their nationwide claims, but declined to dismiss the class certification parts of the case at this time.

Hi-Tech sued HBS, alleging that the label of its protein-powder supplement HexaPro misled customers about the quantity and quality of protein in each serving, in violation of the Georgia Uniform Deceptive Trade Practices Act and the Lanham Act.
The district court dismissed the Georgia claims on FDCA preemption grounds and found that it wasn’t plausible that the label was misleading. The court of appeals affirmed the first conclusion, but reversed the second, and declined to find that the FDCA precluded Lanham Act claims here. The front of the label identifies the product as an “Ultra-Premium 6-Protein Blend” with “25 G[rams] Protein Per Serving,” and it touts the product’s “6 Ultra-High Quality Proteins” and “5 Amino Acid Blend with BCAAs [Branched-Chain Amino Acids].” The left side repeats “an Ultra-Premium, Ultra-Satisfying Blend of 6 High-Quality Proteins” and identifies those six whole-protein sources, stating that the product “is also fortified with 5 Amino Acids to enhance recovery.” The right side features the nutrition-facts table, which states that HexaPro contains 25 grams of protein per serving, and the list of ingredients. This side also has a table labeled “Amino Acid Profile” whose heading indicates that HexaPro contains 44 grams of amino acids per serving, while the table itemizes only 25 grams. Hi-Tech alleged three kinds of deception. First, HexaPro contains free-form amino acids and other non-protein ingredients as well as whole proteins; an analysis that excludes these “spiking agents” and counts only “total bonded amino acids”—which alone are molecularly complete proteins—allegedly yields an “actual protein content” of “17.914 grams per serving,” not 25 grams per serving. However, the applicable FDA regulation permits “[p]rotein content [to] be calculated on the basis of the factor 6.25 times the nitrogen content of the food,” even if not all of a product’s nitrogen content derives from whole-protein sources. Given that, the complaint plausibly alleged that the label was misleading.
“Considering the label as a whole and taking its statements in context, we find it plausible that a reasonable consumer would be misled to believe that a serving of HexaPro contains 25 grams of protein derived from the ‘6-Protein Blend’ comprising the ‘6 High-Quality Proteins’ listed on the label.” Even an additional prominent statement that the product contained an amino acid blend wasn’t enough to avoid this conclusion. The allegation was not that consumers would be misled to believe that the only ingredient is the “Ultra-Premium 6-Protein Blend.” Rather, Hi-Tech argued that the label would induce a reasonable consumer to believe that the protein in HexaPro derives exclusively from the six-protein blend, and this was at least plausible. The label doesn’t indicate that the claimed 25 grams came from any other source than the whole-protein ingredients; other than in the 25-gram claim, it never used the word “protein” to refer to anything other than the whole-protein ingredients, and instead consistently treated “amino acids” as separate from and providing distinct nutritional benefits from “protein.” The “Amino Acid Profile” on the right side of the label listed 25 grams of amino acids, but provided no explanation of how this figure related either to the product’s 25 grams of protein per serving or the 44 grams of amino acids per serving advertised at the top of the table. HBS’s specific arguments for preclusion also failed. HBS argued that application of the Lanham Act would create “a genuinely irreconcilable conflict” with the federal regulation governing protein calculations because it couldn’t simultaneously disclose both 25 grams of protein to satisfy the requirements of the FDA and 18 grams to satisfy Hi-Tech. But that wasn’t the only way to cure the misrepresentation.
“[I]t would suffice to clarify on the HexaPro label how much protein in each serving derives from the six-protein blend and how much derives from free-form amino acids and other non-protein ingredients”; there was no federal law against that.
A networked system for processing queries for a server in a distributed processing environment is provided. The system includes a plurality of clients disposed for communication with a database server through an electronic mail system. The server includes an electronic mail interface for receiving queries submitted by the clients and transmitting the corresponding responses. A processor is also provided for processing the queries submitted by the clients and passing them on to the scheduler. The processor operates to provide bi-directional communication between the mail interface and the scheduler. In addition, the processor retrieves mail messages from the mail interface, translates them into a format recognized by the server, receives query results from the server, and returns the results with the appropriate user identification to the mail interface. A scheduler, provided in connection with the server, provides automated scheduled execution of the mail processor in accordance with a set of programmed tasks.

The present invention relates generally to networked systems, and more particularly, to the processing by a server of requests from a plurality of communicatively coupled computing machines in a network environment. Generally speaking, computer networks include a plurality of communicatively interconnected computing machines (e.g., terminals, micro-computers, mainframe, etc.). Networks seek to better utilize computer resources (e.g., memory, hard disks, printers, processors, files, programs, and processing capabilities) by enabling the constituent computing machines to share the computer resources. Sharing computer resources in a network enables a requesting computing machine, also referred to as a client, to submit a request for an operation to be performed on another networked computer, referred to as a server.
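The mail-driven pipeline summarized above (mail interface → processor → server, with results routed back under the originating user's identification) can be illustrated with a minimal sketch. All names here (MailInterface, process_pending, the account dictionary) are illustrative assumptions, not taken from the patent itself:

```python
# Minimal sketch of the mail-driven query pipeline: the mail interface
# collects queries, the processor (invoked by a scheduler) translates
# and executes them, and results are queued back with the user's ID.
# All names are hypothetical stand-ins for the components described.

class MailInterface:
    """Stands in for the server's electronic-mail interface."""
    def __init__(self):
        self.inbox = []   # (user_id, query_text) pairs awaiting processing
        self.outbox = []  # (user_id, result) pairs to be mailed back

    def receive(self, user_id, query_text):
        self.inbox.append((user_id, query_text))

def execute_query(query_text, database):
    # Stand-in for the database server: look up a record by key.
    return database.get(query_text, "no such record")

def process_pending(mail, database):
    """The 'mail processor': drains the inbox, translates each message
    into a format the server recognizes, runs it, and returns the
    result to the mail interface tagged with the originating user."""
    while mail.inbox:
        user_id, query_text = mail.inbox.pop(0)
        result = execute_query(query_text.strip().lower(), database)
        mail.outbox.append((user_id, result))

# Usage: clients submit queries by mail and disconnect; the scheduler
# later triggers one processing pass.
accounts = {"acct-100": 2500.00, "acct-200": 130.75}
mail = MailInterface()
mail.receive("alice", "acct-100")
mail.receive("bob", "acct-999")
process_pending(mail, accounts)
print(mail.outbox)  # [('alice', 2500.0), ('bob', 'no such record')]
```

The point of the design, as the background section goes on to explain, is that the client need not hold a connection open between submitting a query and reading the result.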
Servers include, for example, database servers, file servers, and print servers that respond to requests by clients for the associated resources provided by the servers. The server processes the request and provides an appropriate response informing the requesting client of the results. Instances in which such an arrangement is particularly beneficial exist where a large database is utilized by a number of users or where a set of users require access to a same set of information within a dynamic database. In such cases, well known benefits are realized by the sharing, via a database server, of access by users to the information and the operations performed by the database server upon the information, such as, for example, performing searches of the database in response to requests submitted by the networked users. In a typical client/server based network, a number of diverse clients are communicatively coupled to one or more servers in order to facilitate the submission of a variety of requests to the servers. A particular type of network server is a database server. Database servers maintain and manage a shared database in a network. By sharing the database, it is possible for the database server to maintain a single master copy of the database. Networked client computers send requests to the database server to add additional records to the database, remove records from the database, and update records in the database. In addition, the clients submit database queries to the database server concerning the information records stored in the database controlled by the database server. Even with a database server dedicated essentially to responding to database requests, when a number of users submit queries to the server in a very short time period or the database server is processing a very large, non-interruptible task, a system bottleneck arises.
Consider for example a business accounting system having a centralized database maintained and managed by a database server communicatively coupled to a number of client machines set up for use by accounting personnel. Assume that the accounting database includes various income and expense accounts. Associated with each account are a number of transaction dates, amounts, comments, etc. A number of accounting personnel may submit substantially simultaneous query requests to the database server. The database server, in response to the simultaneous requests, allocates its processing resources to process the query requests as quickly as possible, while avoiding errors resulting from a query to a partially updated database. In general, there are various response time requirements for execution of database queries by a server. A client machine may submit a high priority query needing immediate attention. On the other hand, another client machine may submit a low priority request that may be responded to by the database server at a later time when higher priority queries are not pending. Continuing with the foregoing example of a database server maintaining account information, a user may submit a high priority request via a network client requiring immediate attention, such as, for example, a request for a particular account balance that demands an expedited response from the server. The client maintains a connection to the server until the client receives a response to the request from the server. In other instances, a user may not need an immediate response. For example, a user logged onto a network client may need a set of previous day's balances for a designated set of accounts at the beginning of the following business day. Such a request would typically be considered a low-priority request.
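The high-priority/low-priority distinction described above amounts to a priority queue: urgent queries are served first, while deferred requests wait for a low-usage window. A toy sketch, with illustrative names not drawn from the patent:

```python
# Toy sketch of priority-based query handling: high-priority queries
# are served before low-priority ones, which can be drained later
# (e.g. after business hours). Names are hypothetical.
import heapq

HIGH, LOW = 0, 1  # smaller number = served first

pending = []      # heap of (priority, sequence, query) entries
seq = 0           # sequence number keeps same-priority order stable

def submit(query, priority):
    global seq
    heapq.heappush(pending, (priority, seq, query))
    seq += 1

def drain():
    """Run by the server during a low-usage window: serve all pending
    queries in priority order, then arrival order."""
    served = []
    while pending:
        _, _, query = heapq.heappop(pending)
        served.append(query)
    return served

submit("previous day's balances", LOW)
submit("current balance, acct-100", HIGH)
submit("monthly expense report", LOW)
print(drain())
# ['current balance, acct-100', "previous day's balances", 'monthly expense report']
```

In the patent's scheme the deferred requests would additionally arrive and depart by electronic mail, so the client can disconnect entirely between submission and delivery.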
In order to avoid tying up network resources such as database servers during high usage periods, known systems include means for delaying carrying out low priority requests from clients. In such systems, the user submits a low priority request, for example, during the previous business day. The request is processed in due course by the database server during a low-usage time (e.g. after business hours). The requesting client, rather than maintaining a connection to the server in order to receive an immediate response, typically terminates a network connection after transmitting the request. The requesting client receives the results at a later time after re-establishing a connection to a network entity containing the results of the request, or by receiving the results in printed form. In a known system, a client submits a request to a connected server and then disconnects before receiving a response from the server. After processing the request, the server transmits the results to an electronic mail (email) location designated in the request. Thereafter, the client retrieves the results of the request from the electronic mail location. While submitting requests that direct the output to be mailed back to the client does, in effect, enable a client to "disconnect" between submitting a query and retrieving the results, it is inefficient from a design perspective, since it requires the client machines to support two separate and architecturally distinct interfaces to accomplish a single task. A client machine, in order to utilize such a system, must support an on-line connection to the server for submitting a request and a separate email interface for reading the results. Furthermore, the presence of two different interfaces may require a user to learn how to use two separate software tools (e.g., an on-line server query tool to submit a request to the server, and another tool to interface with the emailed results). Alternatively, additional integration software (e.g.
a user shell) can be designed to accommodate both interfaces in a manner that is transparent to the user. However, there are clearly implementation costs and complexities that arise from the existence of this hybrid client/server interface. Yet another approach to enabling a user to initiate a request to a server and disconnect before receiving the results involves the use of "detached processes." Detached processes are essentially programs that receive requests from users on client machines that are eventually to be submitted to a server. The detached processes may, in turn, impersonate the users while submitting the received requests to the server and obtaining the results. The user, at some later time, obtains the results of the request through yet another procedure, such as establishing a connection with the detached process in order to obtain the results of the request. It is noted that the detached processes are constrained by the same request/response protocols as the clients. For example, the detached processes will likely maintain a continuous connection to the server while the server processes the request. The detached process approach, while providing a number of well known advantages over direct on-line connections to servers, has certain drawbacks. Since the detached process runs separately from the database server (either on the same machine or on a separate machine), there are processing, memory, and possibly network costs associated with sending the requests and results between the detached process and the database server. In addition, the detached process adds a separate element to the total system that must be monitored so that the detached process is always running and available to handle requests. Furthermore, the use of a detached process introduces yet another communication link in a network that must, in a secure network, be guarded.
Thus, detached processes add complexity and administrative costs to the total system which, in some cases, are prohibitive enough to rule out implementation or use. Accordingly, it is an object of the present invention to provide an efficient networked system that processes user requests submitted to a network server, the results of which are typically viewed at a later time. Another object of the present invention is to provide a flexible client/server interface in a networked system enabling a wide variety of users to take advantage of the shared resources administered by the server. Another more specific object of the present invention is to provide a non-online client/server interface that provides a level of resource security equivalent to on-line interfaces. Another object of the present invention is to facilitate task scheduling by the server of user requests from connected client computers in a network, and thereby reduce the incidence of system bottlenecks that may arise with a server. Yet another object of the present invention is to simplify the user interface and implementation costs associated with providing a variety of methods for initiating, processing, and obtaining the results of a request from a client to a server. Additional objects, advantages and other novel features of the invention, particularly pointed out in the appended claims, will be apparent to those skilled in the art in view of the description that follows. The above described objects are met in a networked system enabling clients to submit requests to a server via electronic mail. The system includes a client having an electronic mail interface for submitting a request to the server. The client initially submits the request to an email address in an email system. The email address corresponds to an electronic mailbox designated for the server. An electronic mail interface in the server retrieves the request from the electronic mailbox.
After the request is retrieved, an email processor interprets the contents of the request retrieved by the email interface, and submits an appropriate command request to a server request processor based upon the request. In accordance with an illustrative embodiment of the present invention, a task scheduler periodically invokes the email processor to process the retrieved electronic mail messages from clients and submit appropriate command requests to the server request processor. After the server request processor generates a response to the command request, the email processor builds a response electronic mail message, including the response to the command request, based upon information contained in a header for the email request. Finally, the email interface transmits the response via email addressed to an electronic mailbox designated for the user that initiated the request. Furthermore, in accordance with an illustrative embodiment of the present invention, the server is a database server and the email request comprises a database command. In the illustrative embodiment of the invention the database command comprises a database query in the form of a Structured Query Language (SQL) statement or stored procedure call. FIG. 7 is a flowchart summarizing the steps executed by the electronic mail processor shown in the block diagram of FIG. 2. Referring now to the figures, FIG. 1 schematically depicts an illustrative distributed processing network 10. The network 10 includes two local area networks (LANs), LAN A and LAN B. Each of these LANs includes a plurality of network client computers C1-Cn. LAN A and LAN B are communicatively interconnected by a wide area network (WAN) link 12, and static routers 14 and 15 facilitate inter-network transfers of messages in a known manner between LAN A and LAN B. WAN links 16 and 18 communicatively link LAN A and LAN B to an electronic mail (email) system 20.
The email system 20 enables users to send and receive messages via electronic files stored and maintained on the email system 20. More particularly, message packets may be sent and retrieved by various clients C1-Cn as well as an SQL Server 22. The SQL server 22, in accordance with an illustrative embodiment of the present invention, is a network server that is configured to respond to Structured Query Language (SQL) commands received from communicatively coupled client computers. It will be appreciated by those skilled in the art that the clients C1-Cn of LAN B may communicate with the server 22 through the static routers 14 and 15 and the WAN link 12. In accordance with the present invention, the SQL server 22 also maintains an account on the email system 20 referenced by an electronic mailbox address. Thus, the clients on both LAN A and LAN B may alternatively communicate with the SQL Server 22 via the email system 20. Indeed, through the instrumentalities of the illustrative embodiment of the present invention, clients submit queries to the server 22 via the email system 20. The server 22 periodically retrieves these queries from the email system 20 for processing. Once the queries are processed, the server 22 transmits the results back to the email system 20, where they may be retrieved by the clients at a later time. It should be noted that, for purposes of the preferred embodiment, the SQL server 22 refers to a server executing a particular software package by Microsoft® Corp. The Microsoft SQL Server is a multi-user database management service which allows a wide range of client applications and tools to share information safely, securely and effectively. Indeed, the Microsoft SQL Server is supported by a number of front-end tools including spreadsheets, databases, development tools, and languages.
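The round trip described above can be sketched hypothetically in a few lines of Python: a client drops a query into the server's mailbox and disconnects, and the server later drains its mailbox, processes each request, and mails the result back. The mailbox dictionary, `send_mail`, `process_query`, and `server_poll` are all illustrative stand-ins, not part of any actual SQL Server or email API.

```python
from collections import defaultdict

# address -> list of (sender, subject, body) messages awaiting retrieval
mailboxes = defaultdict(list)

def send_mail(sender, recipient, subject, body):
    mailboxes[recipient].append((sender, subject, body))

def process_query(sql):
    # Stand-in for the SQL request processor; a real server would
    # execute the statement against its database.
    return "results of: " + sql

def server_poll(server_addr="sqlserver@mail"):
    # The server periodically drains its mailbox and answers each request
    # by mailing the results back to the original sender.
    for sender, subject, body in mailboxes.pop(server_addr, []):
        send_mail(server_addr, sender, "RE: " + subject, process_query(body))

# A client submits a query and disconnects; the reply arrives by mail later.
send_mail("client1@mail", "sqlserver@mail", "SQL: text",
          "SELECT balance FROM accounts")
server_poll()
reply = mailboxes["client1@mail"][0]
```

Note that the client holds no connection while the server works; the mailbox is the only shared channel, which is the fault-tolerance property the text attributes to the email path.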
However, consistent with the broader concepts and teachings of the present invention, the SQL server 22 is a specific example of a server that supports and executes requests submitted via electronic mail from the client computers. It is further noted that the present invention is applicable to a wide variety of alternative network topologies. In one such alternative network topology, the email system 20 and the SQL server 22 reside on the same LAN. Continuing with the description of the illustrated embodiment, reference will now be made to FIG. 2, which shows a functional partition of the SQL Server 22 and supporting components. The various discrete blocks in FIG. 2 reflect functional partitions accomplished by software implementation on actual computer hardware systems, rather than hardware partitions. Indeed, while it is typically true that the mail client 24, the electronic mail system 20, and the SQL server 22 are distinct hardware components in a network, these functional units may correspond to processes running on the same physical hardware component. For example, a single machine may support both the mail system 20 and the SQL server 22. A mail client 24 on a client computer is disposed for communication with the email system 20. The mail client 24 refers to that portion of a client process communicating with the mail system 20, and is specifically denoted as the mail client 24 for purposes of illustration, since the present invention generally concerns client-mail communication. Indeed, it is understood that a typical client possesses the capability to directly communicate with the SQL server 22 (for example, by way of LAN A) and to communicate with the email system 20. Direct communication with the server 22 is known and therefore is not specifically illustrated in the figures nor discussed herein. Similarly, the email system 20 is of conventional design and therefore will not be described in detail in view of the knowledge of those skilled in the art. 
The email system 20 usually is present in the form of a distinct network server. Electronic mail messages are organized in a standardized format, or packet structure. Discrete components of this packet structure include a header comprising a source identification, a destination identification, date/time of transmission, subject, recipients of copies, as well as other known control and status information components. Appended to the packet is the mail message itself. In accordance with the illustrative embodiment of the invention, the mail message is an SQL server query which, as will be described in more detail below, is interpreted and processed by the server 22. However, the mail message may also comprise a record to be added to the SQL database or a request to delete or update a particular record from the database. While the communication format and standards for the email system 20 are known and understood, it is significant to note that the present invention's utilization of the email system 20 realizes certain benefits, including system fault tolerance. For example, a break or disruption in a network link will not result in lost data. Instead, the information will merely be stored until the fault is repaired. In addition, an SQL client on a remote LAN can be configured to attempt to submit an SQL request to the SQL server 22 via the email system 20 when a WAN link connection becomes disrupted, or otherwise unavailable. An additional benefit includes a single type of interface (email) for transmitting requests and responses between a client and the server 22. The email system 20 is illustrated in communication with the SQL server 22. The principal components of the SQL server 22 include an email interface 30, an email processor 32, and a scheduler 34 that invokes a set of tasks performed by the email processor 32. The SQL server 22 also includes an SQL Request processor 36 of known design. 
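The packet structure just described (a header carrying source, destination, date/time of transmission, subject, and copy recipients, with the mail message itself appended) can be rendered as a small Python data type. The class and field names here are illustrative only; they do not come from any actual mail system's format.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MailPacket:
    source: str                  # originating mailbox address
    destination: str             # target mailbox address
    sent: datetime               # date/time of transmission
    subject: str
    cc: list = field(default_factory=list)   # recipients of copies
    message: str = ""            # appended mail message, e.g. an SQL query

pkt = MailPacket(
    source="client1@mail",
    destination="sqlserver@mail",
    sent=datetime(1996, 1, 1, 9, 0),
    subject="SQL: spreadsheet",
    message="SELECT * FROM expenses",
)
```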
These principal components of the SQL server 22 cooperate to retrieve email messages addressed to the SQL server 22 in the email system 20, process the received email messages periodically under the control of the Scheduler 34, and, if required, build and issue appropriate email messages containing the results of processed requests to the email system 20. The email interface 30 is configured, for example by an administrator of the SQL server 22, to periodically log on to the email system 20 in order to check the electronic mail box of the SQL server 22. While logged onto the email system 20, an email receiver 38 reads all of the email messages currently stored in the email account for the SQL server 22. The email receiver 38 stores all of the previously unread messages in a received email message queue, also referred to as a mail "inbox" (not shown), in the SQL Server 22 for later access by the email processor 32. The mail "inbox" typically resides on a hard disk storage device associated with the SQL server 22. While logged onto the email system, the email interface 30 also determines whether any email response messages are present in an email transmit message queue, also referred to as a mail "outbox" (not shown), in the SQL server 22. The mail "outbox" typically resides on the hard disk storage device of the SQL Server 22. If the mail outbox is not empty, then the email transmitter 40 of the email interface sends the email messages in the mail outbox to the email system 20. The operation of the email interface is illustrated in FIG. 6, described below. The above described email receive and send operations are carried out in accordance with known email protocols prescribed by the email system 20. In the illustrative embodiment of the present invention, the mail interface 30 supports the well known Messaging Application Programming Interface (MAPI).
However, other mail interfaces may be used in accordance with alternative embodiments of the invention as long as the ability is maintained for the SQL server 22 to receive email messages. Database servers such as the SQL server 22 generally do not have the capability to make direct calls to an email interface 30. However, the SQL server 22 includes a known mechanism for invoking external functions implemented according to a defined format called "extended stored procedures". The email processor 32 thus includes a stored procedure that invokes a set of extended stored procedures that provide an interface to bridge the operational gap between the SQL request processor 36 and the email interface 30. In this regard, the email processor 32 comprises a dynamic link library (dll) of extended stored procedures that facilitate finding a next message stored in the mail "inbox" of the SQL server 22, reading the message, interpreting the message (by means of an SQL request interpreter 42) in order to convert the message into a proper SQL request format, and submitting the interpreted request to the SQL request processor 36 in the SQL request format. The email processor 32 also includes a function for deleting the message from the mail "inbox" after submitting the request to the SQL request processor 36. However, in an alternative embodiment of the present invention, the email processor 32 does not delete an email message if the email message designates that it should not be deleted. Such an arrangement facilitates the periodic execution of the same request without a user having to re-submit the request. The email processor 32 also receives responses from the SQL request processor 36 corresponding to previously submitted SQL requests. After receiving a response, an email response builder 44 formulates an email response message.
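The inbox-processing functions just described (find the next message, read and interpret it, submit it, and delete it afterwards unless it is marked for periodic re-execution) can be sketched as follows. The function names and the dictionary-based message format are illustrative stand-ins, not the actual extended stored procedure names.

```python
def interpret_request(message):
    # Stand-in for the SQL request interpreter: convert raw message
    # text into a proper SQL request format (trivially, here).
    return message["body"].strip()

def submit_request(sql, results):
    # Stand-in for handing the interpreted request to the request processor.
    results.append("executed: " + sql)

def drain_inbox(inbox, results):
    remaining = []
    for message in inbox:
        submit_request(interpret_request(message), results)
        if message.get("keep"):
            # Message designates it should not be deleted, so it stays
            # in the inbox for the next scheduled run.
            remaining.append(message)
    inbox[:] = remaining

inbox = [
    {"body": "SELECT 1 ", "keep": False},
    {"body": "SELECT 2", "keep": True},
]
results = []
drain_inbox(inbox, results)
```

After the pass, only the message marked to be kept remains, modeling the alternative embodiment in which a request is re-executed periodically without resubmission.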
The receiver of the email response is designated based upon the identity of the user identified in the sender field of the email request to which the SQL response corresponds. In addition, "cc:" copies are designated in the email response message based upon the contents of the "cc:" field in the email request. In addition, the message may include an attached file which is designated in the response email via an option in the interface to the email transmitter 40. SQL query results are generally sent as an attached file in a well-known format such as a spreadsheet or ASCII text. The well-known formats allow the mail client 24 to view the query results using standard application software such as a spreadsheet program or text editor. After building the email response message, the email processor 32, via an invoked extended stored procedure, places the email message in the mail "outbox" associated with the email transmitter 40. The email transmitter 40 sends email responses stored in the mail "outbox" to the electronic mailboxes within the email system 20 corresponding to the users that originally submitted the SQL requests via email, as well as any valid cc'd users. As previously explained, the email interface 30 performs its logon and email read and send operations on a periodic basis. In the illustrative embodiment of the present invention, the email processor 32 is provided with such capabilities as well, and indeed, these scheduling capabilities are expanded to include email request message filtering on the received email messages residing in the mail inbox of the SQL server 22. This capability is facilitated by the Scheduler 34, which governs not only when email SQL requests in the mail "inbox" are processed, but also which requests will be processed and how their results will be formatted in corresponding email responses.
For example, the Scheduler 34 may invoke a task every 10 minutes to specifically search for email messages including "SQL: spreadsheet" in their "subject" field (explained below), and return results for such requests in a "spreadsheet" format. Generally, the Scheduler 34 invokes a set of tasks (described below in conjunction with FIG. 4) programmed by an administrator of the SQL Server 22 for processing the received email in the mail "inbox" of the SQL server 22 on a scheduled basis. In the illustrative embodiment of the invention, the administrator designates, for each task in the task list, whether a task will be executed just once, on demand, or periodically. If the task is executed periodically, then the administrator also programs a frequency at which the task is invoked. Examples of frequencies include monthly, weekly, daily, hourly, or even every "x" minutes. The programmed tasks of the Scheduler 34 also include a "start time" and an "end time". The start time designates when the task is invoked for the first time in a given day, and the end time designates when the task will be disabled. The programmed tasks also include a "start date" and "end date". These task descriptors identify the date when the task will first be invoked and the date on which the task will be inactivated. The task will, however, remain in the task list of the Scheduler 34 in its disabled state. Turning now to FIG. 3, a set of fields are schematically depicted that are included in an email message for use in conjunction with the present invention. In particular, the illustrative email message includes a sender field 50 designating the email account from which the email SQL request originates. As previously mentioned, the email processor 32 saves this value when processing an email request message from the mail "inbox" in order to later designate a proper receiver for the SQL server response.
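The scheduling attributes described above (an invocation frequency, a daily start/end time window, start/end dates, and a disabled-but-retained state outside that window) can be sketched as a hypothetical task record. The class and field names are illustrative; they are not the actual task record layout of FIG. 4.

```python
from datetime import datetime, time, date

class Task:
    def __init__(self, command, every_minutes,
                 start_time, end_time, start_date, end_date):
        self.command = command            # e.g. 'processmail' plus filter args
        self.every_minutes = every_minutes
        self.start_time, self.end_time = start_time, end_time
        self.start_date, self.end_date = start_date, end_date

    def is_active(self, now):
        # Outside its window the task is merely disabled;
        # it remains in the task list either way.
        return (self.start_date <= now.date() <= self.end_date
                and self.start_time <= now.time() <= self.end_time)

task = Task("processmail 'SQL: spreadsheet'", every_minutes=10,
            start_time=time(8, 0), end_time=time(18, 0),
            start_date=date(1996, 1, 1), end_date=date(1996, 12, 31))
```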
In a secure server environment, the SQL server may incorporate security procedures that use the contents of the sender field 50 to determine, using an appropriate verification mechanism, whether the identified sender is authorized to submit the SQL request. Such security mechanisms would be known to those skilled in the art. The email messages also include a receiver field 52. In the context of the present invention, the receiver field of messages contained in the mail "inbox" contains the account name for the SQL Server 22. In alternative embodiments of the present invention, the SQL Server 22 may support a plurality of account names associated with various functions and services provided by the SQL Server 22. The receiver field 52 in a response email message from the SQL Server 22 is filled with the account name of the user that originally submitted an email SQL request to the SQL server 22. The email messages also include a standard Date/Time field 54 identifying when an email message was sent to the electronic mailbox of the identified receiver in the email system 20. Such a field can be utilized by the SQL Server 22 to identify "stale" email requests in the mail "inbox". A subject field 56 is a standard email message field utilized by senders of email to identify the general subject matter to which the email message pertains. However, in the illustrative embodiment of the present invention, the subject field is utilized in conjunction with "filters" associated with specialized tasks invoked by the Scheduler 34 (described above) in order to process the email request in a specific manner. The users of the email interface for submitting SQL requests to the SQL server 22 follow a "filtering" standard designated by the administrator in charge of programming the tasks invoked by the Scheduler 34.
As a result, the users designate processing of their email SQL requests by the SQL server 22 in a certain manner by simply entering a proper sequence of characters in the subject field 56 of an email SQL request. For example, a user might wish to receive the results of an email SQL request in the form of an attached spreadsheet file suitable for use with EXCEL (Trademark Microsoft Corporation). In this case, the user submits an email request and includes the text: "SQL: spreadsheet" in the subject field 56. A programmed task invoked by the Scheduler 34 having a filter corresponding to the "SQL: spreadsheet" character string identifies the SQL request in the mail "inbox", processes the SQL request, and returns a response in the EXCEL spreadsheet format. Another example is a user requesting to receive results simply as a text file within the mail message itself. In this case, the subject field 56 would read "SQL: text". Of course, a wide variety of filters and resulting specialized processing of email SQL requests designating the filters is possible, and is generally only limited by the processing capabilities of the SQL server 22. In addition, while it is often easiest to designate the "filters" in the subject field 56, this information may alternatively be provided in an email message type field 58. Under this approach, the email processor 32, in accordance with the filtering criterion specified by an invoked task, searches the email message type field 58 of email messages in the mail "inbox" in order to apply the filtering criterion. Filters on the sender field 50 and the email message text field 62 are also possible in alternative implementations. In the illustrative embodiment, a copy list field 60 of an email message is utilized by the SQL server 22 to distribute copies of the response to the email SQL request via a known "cc:" designation code in the email response to the sender of the email SQL request.
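The subject-field filtering convention just described reduces, in essence, to matching a task's filter string against each message's subject field and handling only the matches. A minimal sketch, with hypothetical message dictionaries following the "SQL: &lt;format&gt;" convention from the text:

```python
def matching_requests(inbox, filter_string):
    # A task's filter selects only the messages whose subject field
    # carries the designated character sequence.
    return [m for m in inbox if m["subject"] == filter_string]

inbox = [
    {"subject": "SQL: spreadsheet", "body": "SELECT * FROM accounts"},
    {"subject": "SQL: text", "body": "SELECT balance FROM accounts"},
    {"subject": "lunch plans", "body": "noon?"},
]

# A task filtered on "SQL: spreadsheet" picks up only the first request,
# whose result set would be returned as an attached spreadsheet file.
hits = matching_requests(inbox, "SQL: spreadsheet")
```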
As a result, any account that was provided the email SQL request (via the "cc:" email command) will also receive a copy of the email response submitted by the SQL server 22 to the email system 20. Finally, an email message text field 62 includes an actual SQL command set forth in a manner usually designated by on-line users of the SQL Server 22. In the illustrative embodiment, the SQL command may consist of a SELECT statement that returns a query result set. Additionally, the SQL command may comprise an INSERT, an UPDATE, or a DELETE command in order to modify certain information within a database. The SQL command may also comprise a request to execute an identified stored procedure for performing a combination of data retrieval, manipulation, and modification steps. Having described the primary fields of an email message and their function in the illustrative embodiment of the present invention, attention is now directed to FIG. 4, which depicts the fields included in a task record. As explained above, each task is separately invoked by the Scheduler 34 to traverse and selectively process the set of email SQL requests stored in the mail "inbox" of the SQL server 22. A number of the fields have been described above and therefore will not be discussed here in view of the descriptions that accompany the listed fields. It is further noted that the "command field" stores the "processmail" command issued by the Scheduler 34 to commence execution of the task by the email processor 32. In addition, the filter values, also stored in the command field, are passed as parameters associated with the processmail command. In addition, an exemplary administrator interface is provided in FIG. 5 for defining tasks in accordance with the illustrative embodiment of the present invention. Reference is now made to the flowcharts in FIG. 6 and FIG. 7, respectively summarizing the steps comprising the principal operation of the email interface 30 and the email processor 32.
The email interface 30 periodically checks the electronic mail box assigned to the SQL server 22 in the electronic mail system 20 for received SQL requests and sends completed responses from the SQL server 22 to designated recipients in the email system 20. The email processor 32 retrieves email messages placed by the email interface into the mail "inbox" of the SQL server 22, initiates the processing of SQL commands embedded within the email messages, and returns response messages to the mail "outbox" of the SQL server 22. Turning now to FIG. 6, it is noted that in the preferred embodiment of the invention, the SQL server 22 maintains a constant connection with the email system 20. Therefore, after initially logging onto the email system 20, the SQL server 22 only re-establishes a connection and logs onto the email system if the connection is interrupted. In other embodiments of the invention, however, if maintaining a constant connection to the email system 20 is expensive, then additional procedures may be implemented for terminating the connection between the SQL server 22 and the email system 20 during periods of low usage. At step 100, if the email interface 30 determines that the SQL server 22 is not connected and logged onto the email system 20, then control passes to step 102 wherein the email interface 30 establishes a connection and logs onto the email system 20. Otherwise, if the SQL server 22 is currently connected and logged onto the email system 20, then control passes directly to step 104. At step 104, the email interface 30 checks the mail "outbox" that serves as the depository of SQL results which have been converted by the email processor 32 into email response files. If email response files are stored in the mail "outbox" of the SQL server 22, then control passes to step 106.
Each email response file, in addition to having an appropriately formatted response message, contains designated recipients including the originator of the request as well as any appropriate additional recipients designated in the "cc:" field 60 of the original email SQL request. Each email response file also includes text for insertion into the subject field of the email. At step 106, the email interface 30 transmits the email response files containing the responses by the SQL server 22 to the accounts of the designated recipients. After the email transmitter 40 of the email interface 30 has transmitted all of the mail messages contained in the mail "outbox", control passes to step 108. If no messages were currently stored in the mail "outbox" during step 104, then control passes directly to step 108. At step 108, if the SQL server 22 has received new email, then control passes to step 110. At step 110, the email receiver 38 reads the new email into the mail "inbox" of the SQL server 22, and control passes to step 112. However, at step 108, if there is no new email in the SQL server 22's electronic mailbox, then control passes directly to step 112. The email interface 30 is programmed to execute the steps summarized in FIG. 6 in a continuous loop on a delayed basis. Therefore, at step 112 the email interface 30 resets a delay period timer for returning to step 100 when a programmed time interval has elapsed since the email interface 30 last logged onto the email system 20. This delay period may, however, be interrupted by a request to send an email result generated by the email processor 32 and placed in the mail "outbox" of the SQL server 22. FIG. 7 illustratively depicts the steps performed by the email processor 32 in accordance with a processing criterion provided by a task invoked by the Scheduler 34.
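One pass through the FIG. 6 loop described above can be sketched hypothetically as follows: ensure a connection (steps 100-102), flush the outbox (steps 104-106), read new mail into the inbox (steps 108-110), and reset the delay timer (step 112). The state dictionary and its keys are illustrative stand-ins for the interface's actual internal state.

```python
def interface_pass(state):
    if not state["connected"]:                  # step 100
        state["connected"] = True               # step 102: connect and log on
    # Steps 104-106: transmit any pending response files in the outbox.
    state["sent"].extend(state["outbox"])
    state["outbox"].clear()
    # Steps 108-110: read newly received email into the inbox.
    state["inbox"].extend(state["new_mail"])
    state["new_mail"].clear()
    # Step 112: reset the delay period timer before looping back.
    state["delay_remaining"] = state["delay_interval"]

state = {"connected": False, "outbox": ["response1"], "sent": [],
         "inbox": [], "new_mail": ["request1"],
         "delay_interval": 600, "delay_remaining": 0}
interface_pass(state)
```

As the text notes, a non-empty outbox could also interrupt the delay so a freshly built response is transmitted without waiting for the next scheduled pass.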
In particular, at step 202, in response to receiving a task description, the email processor 32 traverses the set of received email SQL requests in the mail "inbox" of the SQL server 22 in search of an email request having a subject field 56 meeting a filtering criterion associated with the task. The search continues at step 202 until the email processor 32 either identifies an email request meeting the filtering criterion, or the email processor 32 reaches the end of the list of email requests stored in the mail inbox. Control then passes to step 204. At step 204, if there are no unprocessed email messages in the mail "inbox" of the SQL server 22 meeting the task's filtering criterion, then control passes to an End step 206 wherein the task is terminated after information concerning the results of the invoked task has been recorded for review by the SQL server 22 administrator. However, at step 204, if an email SQL request meeting the filtering criterion is identified, then control passes to step 208. At step 208, the email processor 32 removes the identified email SQL request message from the mail inbox, and extracts information from the email message comprising the SQL command that is to be passed on to the SQL request processor 36. Generally, the entire message text is passed on to the SQL request processor 36, but it is also envisioned that in some instances, the email processor 32 will perform pre-processing of the message text. For example, some email systems insert message routing information at the beginning of the message text. The email processor 32 could remove such routing information from the message text before submitting the SQL command to the SQL request processor 36. After completing the above described pre-processing, at step 210 the email processor 32 submits the SQL command to the SQL request processor 36. The SQL request processor 36 is ultimately responsible for determining whether a submitted command is valid.
The operations for checking the validity of a request could include determining whether the identified user account with which the email message is associated is authorized to submit the SQL command. In addition, the request must be syntactically correct and must refer correctly to objects (such as tables) that exist in the database associated with the SQL request processor 36. At step 212, if the email message is a valid request, then control passes to step 214. It is noted that the email SQL requests typically contain embedded SQL query commands, and the results returned by the SQL request processor comprise the results of a database query. However, it is envisioned that the email requests will also include database insertion commands for adding a record (attached to the email message) to an identified database, or commands to delete or update existing records. The benefits of such an expansion of the functionality of an email-based interface for a database server include the ability to utilize customized email forms built with generally available email form software tools to submit database record information in an easy to parse, standardized format. Furthermore, customized tasks are easily added to the list of tasks invoked by the scheduler 34. Therefore, the email interface provides a unified interface for not only submitting and receiving queries, but also building and updating the records of the database maintained by the SQL server 22. At step 214, the email processor 32 formats the command results returned by the SQL request processor into the appropriate form for the recipients of the email response (as specified by the parameters which accompanied the email request). An example of such a parameter, discussed above, is a directive for the email processor 32 to format the result set of an SQL query as an attached spreadsheet file. Returning briefly to step 212, if a request in an email message is not valid, then control passes to step 218. 
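The validity check of step 212 and the result formatting of step 214 can be sketched against an SQLite engine standing in for the SQL request processor 36. This is an assumed minimal model: a syntactically bad command or a reference to a missing table surfaces as an error result (routed to step 218), and the "attached spreadsheet" directive is illustrated by rendering the result set as CSV text.

```python
import csv
import io
import sqlite3

def execute_request(conn, sql):
    """Steps 210-212 sketch: submit the command to the database engine.
    An invalid command (bad syntax, nonexistent table) yields an error
    message for the error path of step 218 instead of a result set."""
    try:
        cur = conn.execute(sql)
        return {"ok": True,
                "columns": [d[0] for d in (cur.description or [])],
                "rows": cur.fetchall()}
    except sqlite3.Error as exc:
        return {"ok": False, "error": str(exc)}

def format_as_csv(result):
    """Step 214 sketch: render a valid result set in spreadsheet-friendly
    CSV form, as one possible formatting directive."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(result["columns"])
    writer.writerows(result["rows"])
    return buf.getvalue()
```

The account-authorization aspect of validity checking mentioned above is omitted here; it would be an additional gate before the command ever reaches the engine.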
At step 218, the email processor 32 formats an error message returned by the SQL request processor 36 to be appended to a response email message. Control then passes to step 216. At step 216, the email response builder 44 generates a response email message to be issued to the originator of the corresponding email SQL request. The response email message designates the primary receiver of the email response (the originator of the request) as well as any other "copied" email accounts originally designated in the "cc:" field. In an embodiment of the invention that implements security mechanisms, the SQL server 22 could check each account specified for receiving the results of the SQL request to determine whether the account is an authorized recipient of the SQL response embedded within the email response. The response email message also includes the results (formatted during step 214) of the executed SQL command. Next, at step 220, the email processor 32 writes the response email message and any necessary header information to the mail "outbox" of the SQL server 22. Control then returns to step 202 wherein the email processor again traverses the mail "inbox" of the SQL server 22 for another received email message meeting the task's filtering characteristics. As mentioned above, the concepts and teachings of the present invention extend well beyond processing database queries. That is, it may also be desired to utilize the present invention for executing commands to add, update or delete a record from the database. To illustrate this point, consider a database storing a list of customers on a particular mailing list, for example. A particular user (e.g., employee of catalog company) may, at a remote terminal, input user information that is to be added to the database. 
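The response-building of steps 216 and 220 (address the originator, copy the "cc:" accounts, attach the formatted results or error text, and queue the message in the outbox) can be sketched as follows. The field names "from", "cc", and "subject" are assumed for illustration; the patent speaks only of the originator, the "cc:" field 60, and the subject field 56.

```python
def build_response(request, payload, subject_prefix="Re: "):
    """Step 216 sketch: the email response builder 44 designates the
    request's originator as primary recipient, carries over any copied
    accounts, and embeds the formatted results (or error message)."""
    return {
        "to": request["from"],               # originator of the SQL request
        "cc": list(request.get("cc", [])),   # accounts from the "cc:" field
        "subject": subject_prefix + request["subject"],
        "body": payload,                     # formatted results or error text
    }

def queue_response(outbox, response):
    """Step 220 sketch: write the response message to the mail outbox,
    from which the email transmitter later delivers it."""
    outbox.append(response)
```

A security-conscious embodiment, as noted above, could additionally filter the recipient list down to accounts authorized to receive the embedded results before queuing.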
Rather than requiring the user to establish an on-line connection to the database in real time, the present invention may be utilized to allow the user to submit the information via an electronic mail system. In this regard, the user submits a command to the database server to add certain information (i.e., customer information). The database server, upon receiving this message from the electronic mail system, executes the command by adding the information to the database. If appropriate, a response confirming the addition of the information to the database may be returned to the user via email. While the foregoing provides one alternative example of a contemplated use for the present invention, it will be appreciated that numerous other uses may be achieved. The above description of various preferred embodiments of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obvious modifications or variations are possible in light of the above teachings. The embodiments discussed were chosen and described to provide the best illustration of the principles of the invention and its practical application to thereby enable one of ordinary skill in the art to utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated. All such modifications and variations are within the scope of the invention as determined by the appended claims when interpreted in accordance with the breadth to which they are fairly, legally, and equitably entitled. (c) a message queue means within the networked database server disposed for communication with the electronic mail interface, the message queue means configured to store both incoming messages received from the electronic mail system and outgoing messages built by the processing means. 2. 
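The mailing-list example above can be sketched end to end: an email body in an assumed "Field: value" layout (the kind of easy-to-parse, standardized format the customized email forms discussed earlier could produce) is parsed and inserted as a customer record, and a confirmation string suitable for the reply email is returned. The table schema and field names are hypothetical.

```python
import sqlite3

def add_customer_from_email(conn, body):
    """Mailing-list use-case sketch: parse an assumed 'Field: value'
    email body and add the customer record to the database, returning
    confirmation text for the response email."""
    fields = dict(line.split(": ", 1) for line in body.strip().splitlines())
    conn.execute(
        "INSERT INTO customers (name, email) VALUES (?, ?)",
        (fields["Name"], fields["Email"]),
    )
    conn.commit()
    return "Added " + fields["Name"] + " to the mailing list"
```

Because the command is executed whenever the server next polls its mailbox, the remote user never needs a live connection to the database, which is the point of this example.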
The system according to claim 1, wherein the processing means controls the mail interface to delete messages from the message queue. 3. The networked system of claim 1 further comprising a scheduler for invoking, on a scheduled basis, the operation of the processing means for extracting the query from the email message. 4. The networked system of claim 3 wherein the scheduler comprises a set of tasks performed by the processing means, and each task includes at least one filter for selectively processing ones of the stored email messages. 5. The networked system of claim 4 wherein the at least one filter includes a format code for instructing the execution means to submit the results of an executed query in a specified format. 6. The networked system of claim 5 wherein the query submitted by the client is a database query. 7. The networked system of claim 6 wherein the database query is a Structured Query Language (SQL) query. 8. The networked system of claim 5 wherein the query submitted by the client comprises a query to modify information associated with the networked database server. 9. The networked system of claim 8 wherein the query to modify information comprises a command to add information to the networked database server. 10. The networked system of claim 8 wherein the query to modify information comprises a command to delete information from the networked database server. (c) a message queue disposed for communication with the electronic mail interface, the message queue configured to store incoming messages received from the electronic mail system. 12. The networked system of claim 11 further comprising a scheduler for invoking, on a scheduled basis, the operation of the processing means for extracting the command from the email message. 14. The networked system of claim 13 wherein the at least one filter includes a format code for instructing the execution means to submit the results of an executed command in a specified format. 15. 
The networked system of claim 14 wherein the command submitted by the client comprises a command to modify information associated with the networked database server. 16. The networked system of claim 15 wherein the command to modify information comprises a command to add information to the networked database server. 17. The networked system of claim 16 wherein the command to modify information comprises a command to delete information from the networked database server. Allard et al., "Windows NT and the Internet", pp. 1-20, Jan. 1994. Comer, "A Guide to RFCs," in Internetworking with TCP/IP, vol. I (Second Edition), Appendix 1, 441-447 (Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1991). D. Green, S. DiPiazza, M. Landry, "The Region 3 Electro-Technology Industries Database," Proceedings IEEE Southeastcon '95, Mar. 1995, pp. 372-375. Debra, "Inside Novell NetWare", pp. 338-343, Jan. 1992. Microsoft, "Windows for Workgroups Resource Kit," Version 3.1, Jan. 1992. Microsoft, "Electronic Mail for PC Networks," Microsoft Mail, Jan. 1992. Microsoft Schedule+, 7.0a, copyright 1992-1996 Microsoft Corporation (actual screen dump of Schedule+ program), Jan. 1992.
https://patents.google.com/patent/US5826269A/en
Administrative Law Judge Debra Huston, Office of Administrative Hearings, Special Education Division, State of California, heard this matter in Clovis, California, on October 15 through 19, 2007. Petitioner (Student) was represented by Elaine Yama, Attorney at Law. Student’s mother (Mother) and father (Father) attended the hearing on all days, with the exception of brief absences by Father. Respondent Clovis Unified School District (District) was represented by Karen Samman, Attorney at Law. District Special Education Coordinator Janet Van Gelder attended all days of the hearing, with the exception of brief absences. Student filed the due process complaint in this matter on June 20, 2007. Student’s request for a continuance was granted on August 1, 2007. At the close of hearing on October 19, 2007, the parties’ request for the opportunity to file written closing arguments was granted, and closing briefs were filed and the matter submitted on November 16, 2007. The parties stipulated that the decision would be due 30 days after the submission of closing briefs. 1. Did District have a child find obligation to assess Student in areas of suspected disability, including visual-motor integration, writing, working memory, social/emotional functioning, and behavior, from June 20, 2005 through June 20, 2007? 2. Did District fail to assess Student in areas of suspected disability, including visual-motor integration, writing, working memory, social/emotional functioning, and behavior, as part of its May/June 2006 assessment? 3. Did District deny Student a free appropriate public education by failing to find him eligible for special education and related services under the category of specific learning disability (SLD) or other health impairment (OHI) from June 20, 2005, through June 20, 2007? 
1 Issues 1, 2, and 3 are those framed by Student’s counsel and agreed upon by District’s counsel at the prehearing conference and specified in the August 21, 2007, Order Following Prehearing Conference. Although the Order Following Prehearing Conference in this matter stated the issues as two, rather than three, the issues have been reformulated, without changing their substance, for purposes of organizing this Decision. 2 Student raises a number of additional contentions in his closing brief, which was filed on November 16, 2007, after the hearing in this matter. These additional contentions were not raised by Student in his complaint, they were not discussed during the prehearing conference when the issues Student wished to pursue at hearing were clarified, and they were not heard at the hearing. Therefore, those issues are not determined in this Decision. 3 Although Student had requested reimbursement for additional assessments in his complaint (those of Dr. Howard Glidden and Dr. Dawn Aholu), during the prehearing conference those additional requests were not pursued by Student, as reflected in the Order Following Prehearing Conference, because Student had not pleaded in his complaint a claim for reimbursement for independent educational evaluation (IEE). Therefore, reimbursement for an IEE was not an issue at hearing. 5 This information is included for background purposes, as Student’s claims date back only to June 20, 2005, which is two years prior to the filing of his complaint. 7 Scores that fall in the “at-risk” range suggest a significant problem that may not be severe enough to require formal treatment or a potential developing problem that needs careful monitoring. 8 Ms. Anderson made no errors on the BASC in the areas of adaptability and aggression. With regard to attention, the scoring errors actually improved Student’s profile, but Ms. Anderson did not know if it would change his score in that area. 
However, attention was identified as a primary issue for Student so, according to Ms. Anderson, her recommendations would not have been different. With regard to atypicality, her scoring errors made Student’s atypicality seem more severe and, thus, the mistakes resulted in an overestimation of Student’s atypicality. Student’s rating in the area of atypicality was still not a concern. Ms. Anderson made two scoring errors in the area of depression, but she doubted it would change Student’s score. Ms. Anderson made three scoring errors in the area of leadership, but these errors would not change his score. Ms. Anderson made three scoring errors in the area of social skills, but she could not say whether those errors would have changed his scores. Ms. Anderson made two scoring errors in the area of somatization; these errors made Student’s score in that area higher than it actually was, but that area was still not high enough to cause concern. Ms. Anderson established that there were no scoring errors in the areas of conduct problems, hyperactivity, or withdrawal. Overall, Ms. Anderson did not know what the difference in Student’s BASC scores would be in absence of the errors. However, she believed that her recommendations would be the same without the errors because the errors were not significant to her conclusions and she did not rely solely on the BASC in reaching her recommendations. In addition, Student was making progress in the general education program and benefiting from services and accommodations provided, and he was making progress behaviorally and academically. 9 In his closing brief, Student contends the District’s assessments were not appropriate. 
The appropriateness of the assessment was not an issue claimed in Student’s due process complaint, was not discussed at the prehearing conference when Student was given the opportunity to clarify issues, was not an issue included in the August 21, 2007, Order Following Prehearing Conference, and was not heard at the hearing. Therefore, it is not an issue that will be determined in this Decision. 10 On October 19, 2007, the final day of hearing, Ms. Dolin testified that the protocols from her assessment of Student were at her office. Student’s counsel requested that Ms. Dolin’s testimony be stricken because District had not disclosed Ms. Dolin’s protocols to Student, even though Student had requested them a number of times. The ALJ ordered District to provide Ms. Dolin’s protocols to Student that day, and the protocols were copied and provided to Student’s counsel. After a lengthy discussion between the ALJ and counsel, it was stipulated that Student’s counsel would be given one week to review the protocols and determine if she required an additional day of hearing to further cross-examine Ms. Dolin or to call another expert to testify regarding Ms. Dolin’s protocols. The ALJ ordered Student’s counsel to inform District’s counsel by October 26, 2007, if she wished to have an additional day of hearing. The parties stipulated that if an additional day of hearing was required by Student, that day of hearing would be held on November 2, 2007. Neither OAH nor District’s counsel received notice from Student’s counsel that she wanted an additional day of hearing. On October 23, 2007, Student’s counsel filed a motion to reopen the hearing to call another witness, unrelated to Ms. Dolin’s protocols. That motion was denied on October 30, 2007, and the stipulated procedure for additional testimony relating to Ms. Dolin’s protocols was restated in the order and served by facsimile on Student’s counsel that day. 
Late in the afternoon on November 2, 2007, Student’s counsel filed a notice of additional testimony regarding Ms. Dolin’s protocols, and scheduled it for November 5, 2007. OAH treated the notice as a motion to reopen the record, and denied the motion as untimely. In her closing brief, Student’s counsel argues that there were a number of scoring errors in Ms. Dolin’s protocols, that her assessment in the areas of visual-motor skills and sensory processing are invalid, and that the visual-motor testing completed by Ms. Anderson and Dr. Glidden appear to be a more accurate reflection of Student’s visual-motor skills. Evidence of errors in Ms. Dolin’s protocols, if any, was not presented during the hearing in this matter. Moreover, as is determined infra, Ms. Dolin’s findings regarding Student were consistent with those of Student’s expert, occupational therapist Dr. Dawn Aholu. 11 Although Dr. Aholu recommended outpatient occupational therapy two times a month for eight visits to remediate the concerns she noted in her report, she did not make recommendations for school and did not contact the school. Dr. Aholu provides therapy on an outpatient basis, and has never worked for a school or as an OT in an educational setting. Student’s parents never scheduled the appointments, and Student did not receive the therapy Dr. Aholu recommended. In a clinic-based setting, an occupational therapist can address difficulties in daily life reported by the parents, but that is not the function of schools. 12 Although appropriateness of assessment is not an issue in this matter, it is noteworthy that Student’s expert, Dr. Patterson, administered the Bender Visual Motor Gestalt Test-II to Student on August 25, 2007, and determined that Student’s psychomotor functioning is in the 88th percentile, based on a common standard score of 118. 13 Dr. Patterson is a psychologist in private practice with 40 years of experience assessing children. Dr. 
Patterson has been a teacher, a school counselor, a school administrator, and a school psychologist. He holds several academic degrees, including a master’s degree in developmental psychology and a doctorate in clinical psychology. Dr. Patterson is a licensed psychologist, a licensed educational psychologist, a nationally certified school psychologist, a licensed marriage, family, and child therapist, a nationally certified counselor, and a nationally certified gerontological counselor. Dr. Patterson belongs to the Prescribing Psychologists Registry in Guam and in Colorado, and is authorized to bill for medication management services in those jurisdictions. Dr. Patterson has taught college-level classes in conducting assessments, and has conducted thousands of assessments of children for schools, regional centers, and the California Youth Authority. Dr. Patterson produces approximately 150 assessment reports each year. Dr. Patterson ran a clinic for children with ADHD, has assessed many children with this disorder, and is very familiar with the disorder. Dr. Patterson assessed Student on August 25, 2007, and administered a number of instruments, including the Woodcock-Johnson Psycho-Educational Test Battery Test of Cognitive Abilities – Third Edition (WJ-III), which is a test of general intellectual ability (Dr. 
Patterson administered 19 different subtests of the WJ-III that measure various processing abilities); the Wide Range Achievement Test – Fourth Edition (WRAT-4), a test of achievement functioning; the Peabody Individual Achievement Test – Revised (Nu Form) (PIAT-R(nu)), a test of achievement functioning; the Bender Visual Motor Gestalt Test – Second Edition (Koppitz-2), a test of psychomotor functioning; the Behavior Regulation Inventory for children – Second Edition (BRIEF), which is a test of attentional functioning; the Conners’ Parent Rating Scale – Revised: Long Form (CPRS-R: L), which is an assessment of attentional functioning; the Detroit Test of Learning Aptitudes Motor Speed and Precision Test (DTLA) (Dr. Patterson administered this test when Student was on medication and also when he was not on medication), which is a test of attentional functioning; the Cancellation of Rapid Recurring Target Figures (CRRTF); the Adaptive Behavior Inventory (ABI), which is a test of adaptive functioning; and the Personality Inventory for Children – 2nd Edition (PIC-2), which is a test of social-emotional functioning. Dr. Patterson also reviewed all of the assessment reports regarding Student, and he observed Student’s behavior in his office. Dr. Patterson diagnosed Student with ADHD: Combined Type (With Comorbid Oppositionality), and also with a disorder of written expression. 14 Although District did not have Dr. Patterson’s August 2007 report at the time it conducted its May 2006 assessment, it is noteworthy that Student’s expert, Dr. Patterson, tested Student in the area of working memory in August 2007 and determined that on the working memory tasks from the WJ-III, Student “was performing well into the average range.” Student’s overall broad processing cluster for working memory was a common standard score of 103, which is at the 57th percentile. 
Student received a standard score of 104, which is at the 60th percentile on the Auditory Working Memory task administered by Dr. Patterson. On the numbers reversed subtest, Student received a common standard score of 101, which is at the 53rd percentile. 15 Scores that fall in the “clinically significant” range suggest a high level of maladjustment. 16 Student did not contend in his complaint or at hearing that he was eligible for services under any other category of eligibility, such as emotional disturbance. Mother felt strongly that Student was not eligible under that category. Therefore, Student’s eligibility under that category is not considered in this Decision. 17 SLD eligibility may be found by either of two methods: the “severe discrepancy” method or the response to intervention (RTI) method. “When determining whether a child has a specific learning disability…a local educational agency shall not be required to take into consideration whether a child has a severe discrepancy between achievement and intellectual ability in oral expression, listening comprehension, written expression, basic reading skill, reading comprehension, mathematical calculation, or mathematical reasoning.” (20 U.S.C. § 1414(b)(6)(A); see also 34 C.F.R. 300.309 (b); Ed. Code, § 56337, subd. (b).) Instead, “a local educational agency may use a process that determines if the child responds to scientific, research-based intervention as a part of the evaluation procedures … .” (20 U.S.C. § 1414(b)(6)(B); see also Ed. Code, § 56337, subd. (c).) Accordingly, the RTI method of determining SLD is not a test or procedure that must be conducted with every child who has a processing disorder, but instead is a way that a local educational agency may determine eligibility based on an underachieving child’s response to scientific, research based interventions conducted in the classroom. 
Student presented no evidence regarding the RTI method of determining SLD eligibility and did not argue in his closing brief that he was eligible under the RTI method. 18 In addition, according to Dr. Patterson, Student also qualifies under OHI because of motor speed and accuracy issues; however, as discussed above, Dr. Patterson’s determination regarding processing speed deficits was not credible. 19 Decision speed is measured by having the student match visual concepts, or two pictures. Auditory and verbal information are minimized with this subtest. The subtest measures the ability to process visual information quickly, and gives information as to one’s ability to acquire a symbol system, such as language, and a deficit in this area could interfere with one’s ability to learn to read or comprehend. 20 The federal regulations within Title 34 Code of Federal Regulations part 300 were amended effective October 13, 2006. The federal regulations cited herein do not substantively differ from their predecessor, although the new federal regulations are numbered differently than the old federal regulations. The citations herein are to the new regulations. Student contends that the District had a child find obligation to assess him in areas of suspected disability, including writing, visual-motor integration, working memory, social/emotional functioning, and behavior, from June 20, 2005 through June 20, 2007. The District disagrees, contending that it assessed Student in these areas of suspected disability when it had reason to suspect that Student had a disability in those areas and may have needed special education. Student further contends that when District assessed him in May and June 2006, it failed to assess him in areas of suspected disability, including writing, visual-motor integration, working memory, social/emotional functioning, and behavior. District contends it assessed Student in all these areas. 
Student further contends that District denied him a free appropriate public education by failing to find him eligible for special education and related services under the categories of SLD and OHI. Student contends that he is eligible for special education under the category of SLD because he has an attention deficit disorder and also processing speed disorders, and that he demonstrates a severe discrepancy between intellectual ability and academic achievement as measured by standardized testing. Student contends that he is eligible for special education under the category of OHI because he suffers from an attention deficit disorder that interferes with his ability to complete schoolwork. With respect to both SLD and OHI, Student contends that his disability adversely affects his educational performance in that he has an inability to write and lacks sufficient processing skills to achieve in the area of written language, and because his behaviors that result from his ADHD (aggressive behaviors and difficulties with work completion) result in his removal from the classroom and he is losing learning opportunities when he is not in the classroom. Therefore, Student contends that he requires special education and related services. District contends that Student is ineligible under the category of SLD because he does not have a severe discrepancy between intellectual ability and academic achievement, and that even if there were such a discrepancy, Student does not require special education because he is making educational progress in the general education classroom, he is achieving academically in the average to above average range, he is earning average to above average grades, and he is scoring in the proficient and advanced ranges on the California Standardized Testing and Reporting (STAR) test. 
District contends that Student is ineligible under the category of OHI because his educational performance was not adversely impacted by his ADHD, and that even if it were, he does not require special education because, as stated above, he is progressing academically in the general education classroom. District acknowledges that Student has difficulties with completing work, that he has behavior problems, and that he is noncompliant with nonpreferred activities, such as writing. However, District contends that Student is able to write, but chooses not to at times. District further contends that it is addressing Student’s work completion and behavior issues through a plan pursuant to Section 504 of the Rehabilitation Act of 1973 (504 Plan),4 and that the Section 504 plan is working. Student filed his due process complaint on June 20, 2007, and OAH set the due process hearing for August 28, 2007. The prehearing conference was held on August 20, 2007, and, during that conference, the due process hearing was continued on Student’s motion, to commence on September 12, 2007. The grounds for the request for continuance were that Dr. Robert Patterson, Student’s expert, had not completed his assessment report regarding Student and that Dr. Patterson was unavailable during the week scheduled for hearing. The continuance was granted. Dr. Patterson did not assess Student until August 25, 2007, which was four days after the prehearing conference. On September 6, 2007, the parties filed a joint request to postpone the hearing one day because Ms. Yama had not served Student’s evidence on District five business days prior to the hearing, as required by law. The joint request to postpone the hearing was granted. On September 13, 2007, when the ALJ and parties were assembled in Fresno for the due process hearing, Ms. Yama still had not disclosed Dr. Patterson’s report to District. Since Ms. Yama had not provided Dr. 
Patterson’s report to District, as was previously instructed, District and the ALJ assumed Ms. Yama no longer intended to call Dr. Patterson as a witness. However, at the commencement of the hearing, District and the ALJ were informed by Ms. Yama that this was not the case. Ms. Yama intended to call Dr. Patterson even though she had not disclosed his report, and she intended to elicit testimony regarding his assessment of Student. Ms. Yama’s explanation for not having disclosed the report was that Dr. Patterson’s typist had been ill, and the report had not been typed. However, Ms. Yama had not informed District of this fact prior to the hearing, and she had not informed OAH. Nor had she requested a continuance. District moved to bar testimony by Dr. Patterson based on the fact that his report had not been disclosed to District, as required by law. In addition, District contended, if Dr. Patterson was not going to produce a report, District would be entitled to receive Dr. Patterson’s testing protocols if Student intended to elicit testimony regarding the testing. The ALJ ruled that although Dr. Patterson would be permitted to testify as an expert as to general matters, he would not be permitted to testify as to his assessment of Student because neither his report nor his assessment protocols had been disclosed to District. Ms. Yama requested another continuance, which District did not oppose. The request for continuance was granted. Thus, the hearing did not go forward on that day, as scheduled. On October 9, 2007, District filed a Motion for Sanctions requesting an order for legal fees, in the amount of $783, incurred by District in participating in the due process hearing on September 13, 2007. The amount was based on an hourly rate of $174, and 4.5 hours of Ms. Samman’s time. On October 12, 2007, Student filed its Objection to District’s Motion for Sanctions. Ms. Yama contended in her opposition that she was required to disclose Dr. 
Patterson’s report only if it was available five business days prior to the hearing, and it was not. Otherwise, she was not required to disclose it. Prior to the commencement of hearing on October 15, 2007, Ms. Yama disclosed Dr. Patterson’s report, and Dr. Patterson testified at the hearing. The ALJ ruled that District’s Motion for Sanctions would be determined in this Decision. days prior to the hearing pursuant to paragraph (7) of subdivision (e) of Section 56505.” (Ed. Code, § 56505.1, subd. (f).) This ensures the right to confront and cross examine the witness, pursuant to Education Code section 56505, subdivision (e)(3). a party to pay reasonable expenses, including attorney’s fees, incurred by another party as a result of bad faith actions or tactics that are frivolous or solely intended to cause unnecessary delay, as defined in Code of Civil Procedure section 128.5. Code of Civil Procedure section 128.5, subdivision (b)(2), states that “frivolous” means totally and completely without merit or for the sole purpose of harassing an opposing party. Dr. Patterson was Student’s key expert in this matter, and there was no reasonable basis in law on which Ms. Yama could have determined that she could appropriately circumvent disclosure requirements and elicit testimony from Dr. Patterson regarding his assessment of Student without disclosing Dr. Patterson’s report or testing protocols. Ms. Yama’s actions created a risk for her client that the ALJ would bar Dr. Patterson from testifying on behalf of Student, and that was a risk that Ms. Yama was apparently willing to take when she appeared at hearing on September 13, 2007, ready to proceed even though she had not disclosed Dr. Patterson’s report or testing protocols. Therefore, her actions cannot be considered to be in bad faith or solely intended to cause unnecessary delay. Therefore, District’s motion for sanctions is denied. 1. 
Student is 10 years of age and resides with his father (Father) and mother (Mother) within the geographical boundaries of District. Student’s primary language is English. 2. Student is currently in the fourth grade at Maple Creek Elementary School (Maple Creek) within District. Student attended a private school for preschool and kindergarten, and has attended school at Maple Creek in a general education classroom since beginning first grade. Student has a history of difficulties with work completion, following instructions, behavior, and social skills, all dating back to preschool. Student was diagnosed with ADHD in 2003, prior to beginning first grade at Maple Creek, and over the years he has taken a number of medications prescribed for that condition. Student, who was a young first grader with a September birthday, was retained one year in first grade because of lack of maturity, behavior problems, and problems with writing, reading, spelling, and fine motor skills. 3. In January and February 2005, District assessed Student for eligibility for special education and related services.5 School psychologist Ann Anderson conducted a psychoeducational evaluation of Student and prepared a written report, dated January 27, 2005, and resource specialist teacher Terri Weigand prepared an educational evaluation on January 24, 2005. Ms. Anderson assessed Student in the areas of cognitive development, social and emotional development, and motor and perceptual development. Ms. Anderson administered a number of instruments, including the Developmental Test of Visual-Motor Integration (VMI), which is a norm-referenced, standardized measure of visual-motor integration that requires a child to copy a set of progressively more complex geometric designs. Ms. 
Anderson administered the Wechsler Intelligence Scale for Children: Fourth Edition (WISC-IV), which is a norm-referenced, standardized measure of intelligence for children that yields an intelligence quotient (IQ) with a mean of 100 and a standard deviation of 15. Ms. Anderson also administered the Behavior Assessment Scale for Children (BASC), which is a norm-referenced, standardized behavioral assessment system designed to facilitate the differential diagnosis and classification of a variety of emotional and behavioral disorders of children and to aid in the design of treatment plans. Mother, Student’s teacher, and Student completed rating forms for the BASC. Ms. Anderson also administered the ADDES-Second Edition (ADDESS), which is an instrument used to report the presence of symptoms related to ADHD, because of concern regarding hyperactivity and impulsive behaviors. Both Mother and Student’s teacher completed rating scales for the ADDESS. Ms. Anderson also administered the SPELT-II, which is an informal language screening. Ms. Anderson also reviewed previous referral information, observed Student, and reviewed report cards. Ms. Weigand administered the Woodcock Johnson III Tests of Achievement (WJTA), along with several other tests. School nurse Sue Holmen conducted a health assessment. 4. On February 22, 2005, District convened an initial individualized education program (IEP) team meeting regarding Student. The IEP team discussed the assessments and reviewed Student’s academic achievement. The IEP team determined that Student was not eligible for special education and related services because he was learning and making progress in the regular education classroom. In addition, Student’s teacher was implementing various strategies to increase Student’s writing, such as the use of a dictionary and having Student create his own dictionary, and these strategies were working. 
Student’s teacher was also working with Student to improve compliance with writing assignments, and Student was practicing more and improving. The strategies that were being implemented to improve behavior, such as a positive behavior contract, were working as well. 5. Also on February 22, 2005, immediately following the IEP team meeting, District convened a student study team (SST) meeting to determine whether Student required accommodations. Because of Student’s distractibility, organizational skill weakness, classroom behaviors, and poor work completion, District developed an initial accommodation plan for Student pursuant to Section 504. That Section 504 plan addressed areas of need, including attention, focus, organization, work completion, behavior, and test-taking. Did District have a child-find obligation to assess Student in the areas of visual-motor integration, writing, working memory, social/emotional functioning, and behavior, from June 20, 2005, through June 20, 2007? 6. Student contends that, from June 20, 2005, through June 20, 2007, District had a child-find obligation to identify, locate, and assess him in areas of suspected disability, including visual-motor integration, writing, working memory, social/emotional functioning, and behavior, and that, with the exception of the May/June 2006 assessment, it failed to do so. 7. The Individuals with Disabilities Education Act (IDEA) and state law impose upon each school district the duty to actively and systematically identify, locate, and assess all children with disabilities or exceptional needs who require special education and related services. This statutory obligation is often referred to as the “child find” obligation. The child find obligation also applies to children who are suspected of having a disability and of being in need of special education even though they may be advancing from grade level to grade level. 
A district’s child find obligation toward a specific student is triggered when there is reason to suspect a disability and that special education services may be needed to address that disability. 8. Student began second grade in the 2005-2006 school year with his February 2005 Section 504 plan, described above, in place. On October 11, 2005, District convened an SST meeting to review Student’s Section 504 plan. The areas of concern at that meeting were work completion, attention and focus, and written work. Ms. Lisa Bath, the school psychologist who was responsible for implementing Student’s Section 504 plan, testified credibly that the Section 504 team discussed the fact that Student did not like to write, that he struggled with writing, that it was difficult for him, and that he did not believe that he was good at it. Handwriting was difficult for Student and required additional focus. It appeared to Ms. Bath at the time that handwriting was something that Student did not like to do. The October 11, 2005, Section 504 plan provided for a reduced amount of repetitive drill work or shortened assignment length, and incentives for work completion. According to the notes from that meeting, Student was self-monitoring his work and would check in with his teacher as he completed assignments. He was taking home the work he needed to finish in class. According to Student’s teacher, he was completing about half his class work during class in October 2005. Also, District staff suggested the “Type to Learn” program for Student to use at home to learn to type because of his difficulties with writing. 9. By the middle of his second grade year, Student began having behavior problems at school, such as aggression toward other students and tantruming in class. Student was disciplined for various infractions 24 times during the 2005-2006 school year. 
Student had 25 referrals to the office and/or “accountability checks”6 from October 2005 through April 2006, and approximately 16 of these 25 were based on Student’s failure to complete class work. The purpose of the school’s accountability program is to teach children to make appropriate choices, with the goal of extinguishing negative behaviors and replacing them with positive ones. Student’s parents felt that District was holding Student accountable for his disability, and that District did not care about his disability. They believed Student should be held accountable only for dangerous conduct or behaviors. 10. A follow-up SST meeting was held on February 28, 2006. In November 2005, Dr. Howard Glidden had conducted a neuropsychological evaluation of Student based on a referral from Student’s primary care physician, and had diagnosed Student with ADHD and developmental coordination disorder. Student’s parents provided Dr. Glidden’s report, along with his recommendations, to District in February 2006 at the Section 504 team meeting. As a result of that meeting, a behavior support plan (BSP) was developed and implemented to address Student’s aggressive behaviors and work completion difficulties. The BSP states that Student’s behaviors were interfering with his learning. Student was completing some of his work in class, and the rest was being completed at home. 11. Student’s parents believed, and expressed to District personnel at the February 2006 Section 504 meeting, that Student’s problems with work completion and behavior were the result of his ADHD, and not the result of a behavior disorder. Student’s parents were concerned about the number of hours Student was spending to complete his class work at home, and they believed he was being punished for his inability to complete his work in class and, therefore, he was being punished for his disability. 
Student’s parents believed that the responsibility of educating Student had been shifted to them, and that Student needed something more than he was getting at school because he was unable to complete his work at school and had to bring it home. Mother estimated that Student was completing only 10 to 20 percent of his work in class, and the rest was coming home. According to Mother, the only reason Student’s grades were good was because she and Father were educating Student and providing one-to-one assistance on Student’s class work at home. 12. School psychologist Ms. Bath was sufficiently concerned about Student’s escalating aggressive behavior and class work completion problems, as discussed at the February 28, 2006, Section 504 meeting, that she contacted Mother by telephone in March 2006 and suggested that Student might qualify for special education under the category of emotional disturbance (ED). Mother became upset and denied that Student had an emotional disturbance. District prepared an assessment plan for Student in April 2006, and Student’s parents signed that assessment plan. Student’s contention for purposes of this issue is that Student should have been assessed in areas of suspected disability, including visual-motor integration, writing, working memory, social/emotional functioning, and behavior, prior to that time. 13. Student’s contention that District failed to assess him in the area of visual-motor integration is based on the argument that visual-motor integration was an area of suspected disability, that it rendered Student unable to complete his written work at school, and that District failed to refer him for assessment in this area until April 2006. 14. District had assessed Student in the area of visual-motor integration in January 2005, as discussed in Factual Finding 3. Ms. 
Anderson, the school psychologist who administered the psychoeducational evaluation in January 2005, determined that Student scored in the below average range in that area with a standard score of 79, which placed him in the sixth percentile when compared to other children his age. This indicates that his visual-motor integration and fine motor skills were delayed, and that he could experience problems writing as quickly or neatly as other students, or problems copying from the board. However, Ms. Anderson testified credibly that Student’s problems with writing were behaviorally based. Student had good ideas for writing, but he refused to write and needed prompting to start writing. Student insisted on writing and spelling perfectly, he wanted his written work to be perfect, and he wanted individual help with spelling in order to write. Student’s first grade teacher, Ms. Linn, established that Student was capable of writing when he was in her class in the 2004-2005 school year. In February 2005, Student’s IEP team determined that Student did not meet the eligibility criteria for special education. 15. Ms. Lori Kuipers, a teacher with more than a decade of experience in teaching first and second grade, was Student’s second grade teacher from the beginning of second grade in the fall of 2005 through the end of April 2006. Ms. Kuipers was well aware of Student’s work completion problems, particularly with writing. Student completed little written work in class, and Ms. Kuipers sent him to the office many times as a result of his work completion problems. However, she testified credibly that when Student wanted to write, his writing was on grade level. Ms. Kuipers conducted informal assessments of students in her class in the areas of spelling and language, including punctuation and grammar, and Student tested above grade level on these tests. Ms. Kuipers did not have a problem getting Student to complete these informal assessments. 
At the beginning of the school year, Student was writing in his journal in class. Testimony of Ms. Kuipers established that when Student liked what he was working on, he could write on grade level, form complete sentences with capitalization and punctuation, and convey a story and narrative. Testimony of Ms. Kuipers also established that Student’s failure to complete work was not caused by a lack of ability on Student’s part but, rather, was due to his choice not to complete the work. In addition, Student was making academic progress that year, and earning A and B grades. 16. Also, Dr. Howard Glidden, a private psychologist retained by Student’s parents to conduct a neuropsychological evaluation of Student, administered the Beery Visual Motor Integration (5th Edition) (Beery) to Student on November 16, 2005. Student scored in the 36th percentile on that administration of the Beery. Dr. Glidden also administered the Bender Visual-Gestalt Test (Bender) to Student on November 16, 2005, and Student obtained a standard score of 85 on that measure. That score was within the average range. Student’s parents gave his report to District at the SST meeting on February 28, 2006. 17. Based on the foregoing, District had no reason to suspect that Student had a disability in the area of visual-motor integration and that he might need special education prior to the early months of 2006, and it referred Student for assessment in April 2006. 18. Student’s contention that District failed to assess him in the area of writing is based on the argument that writing was an area of suspected disability because Student could not complete written work. Student contends District did not refer him for assessment until April 2006. 19. As discussed above in Factual Finding 3, District assessed Student in the area of writing in January 2005 as part of District’s initial assessment of Student. Based on the academic assessment conducted by Ms. 
Weigand, it was determined that Student’s writing was on grade level at the time. Student’s first grade teacher, Ms. Linn, established that his writing was on grade level when he wanted to write. When Student began second grade in the fall, Ms. Kuipers informally assessed him in the area of writing and determined he was on grade level. Ms. Kuipers was aware that Student would not write sometimes, and that he had difficulties with completion of written work. However, she testified credibly that when Student liked what he was working on, he could write on grade level and form complete sentences with capitalization and punctuation. He could convey a story and narrative. Ms. Kuipers testified credibly, and based on her experience, that Student wrote when he wanted to. Student was achieving A and B grades that year. Until the early months of 2006, District had no reason to suspect that Student had a disability in the area of writing and needed special education, and it referred Student for assessment in April 2006. 20. Student’s contention that District failed to assess him in the area of working memory is based on the argument that working memory was an area of suspected disability, and that District did not refer Student for assessment until April 2006. 21. District’s January 2005 assessment of Student, discussed above in Factual Finding 3, included the administration of the WISC-IV, and assessed Student in the area of working memory. Student received a standard score of 83, which is in the 13th percentile, and in the low average range. Student’s IEP team determined in February 2005 that he was not eligible for special education. Dr. Glidden administered the WISC-IV to assess Student in the area of working memory in November 2005. Student received a standard score of 88, which is in the 21st percentile. Student’s parents gave this report to District at the February 28, 2006, SST meeting. 22. As discussed above, Student was progressing academically in the second grade. 
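The percentile ranks attributed to these WISC-IV working memory scores follow directly from the test’s normal scaling (mean of 100, standard deviation of 15) described in Factual Finding 3. As a purely illustrative sketch, not part of the decision or the testimony, the conversion from a standard score to a percentile rank can be computed with the normal cumulative distribution function:

```python
# Illustrative only: mapping a norm-referenced standard score to a percentile
# rank, assuming scores are normally distributed with mean 100 and standard
# deviation 15, as the decision describes for the WISC-IV.
from math import erf, sqrt

def percentile_rank(standard_score: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Percentage of same-age peers expected to score at or below this score."""
    z = (standard_score - mean) / sd
    return 100.0 * 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Scores reported in the working memory findings:
print(round(percentile_rank(83)))  # -> 13 (District's January 2005 administration)
print(round(percentile_rank(88)))  # -> 21 (Dr. Glidden's November 2005 administration)
```

Both reported percentiles, the 13th for the standard score of 83 and the 21st for the standard score of 88, are consistent with this scaling.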
He was achieving A and B grades and achieving at or above grade level in all areas. District had no reason to suspect that Student had a disability in the area of working memory and that he needed special education in that area, and it had no obligation to assess Student in that area. 23. Student’s contention that District failed to assess him in the area of social/emotional functioning is based on the argument that this was an area of suspected disability, and that District did not refer Student for assessment until April 2006. 24. As discussed previously in Factual Finding 3, District assessed Student in January 2005, and this assessment included assessment in the area of social/emotional functioning, based in part on Ms. Anderson’s administration of the BASC. It was determined as a result of that assessment that Student demonstrated behavior patterns in the classroom setting that placed him in the at-risk7 range for adaptive skills, which include difficulties with adaptability, social skills, leadership, and study skills. Student was also determined to be in the at-risk range on the depression and hyperactivity scales. Student was, at the time, taking Metadate, a medication prescribed for ADHD. In the classroom, Student was constantly in motion, had difficulty staying on task, and had rapidly fluctuating moods. Student would often sit and do nothing and fail to complete tasks if he did not have frequent monitoring and supervision by the teacher. Student had difficulty with transitions because of his need to maintain his backpack contents, desk contents, and supplies in a very precise order, which caused Student to lag behind. Student’s IEP team determined in February 2005 that he was ineligible for special education. 26. Student began the second grade in the fall of 2005, and made academic progress, as established by Ms. Kuipers’ testimony, discussed above in Factual Finding 19. 
While Student exhibited negative behaviors in terms of failing to complete class work, he was not exhibiting aggressive behaviors toward other children. In the middle of the school year, Student began refusing to do work in class on some days, and he began hitting, tripping, and throwing things at other children in the classroom and on the playground. Prior to that time, and based on the fact that Student had been assessed in the area of social/emotional functioning in January 2005 and that Student’s February 2005 IEP team determined that Student was not eligible for special education, District had no reason to suspect that Student had a disability in the area of social/emotional functioning and that he may need special education. When Ms. Bath, who engaged in ongoing communication with Ms. Kuipers, Student’s teacher, learned of Student’s aggressive behaviors toward other children and that Student’s work completion difficulties were worsening, Ms. Bath met with Student’s parents at the February 28, 2006, Section 504 meeting, and she began initiating the referral process for Student to be evaluated. District’s April 2006 referral of Student for evaluation was made within a reasonable time. 27. Based on the foregoing, District had no reason to suspect Student had a disability in the area of social/emotional functioning and may have needed special education prior to the time that it referred Student for assessment in that area. 28. Student’s contention that District failed to assess him in the area of behavior is based on the argument that behavior, including aggressive behaviors and failure to complete work, was an area of suspected disability because Student was not completing work in class and was exhibiting inappropriate behaviors. Student further contends that District did not refer Student for assessment until April 2006. 29. As discussed above in Factual Finding 3, District assessed Student in January 2005. 
That assessment included assessment in the area of social/emotional functioning, as discussed above. During the 2005-2006 school year, Student’s Section 504 plan was in place, Student was completing some of his work, and he was not behaving aggressively toward other students. As discussed in Factual Finding 19, Student was progressing academically. In the middle of the school year, Student’s problems with work completion escalated and Student began hitting, kicking, and throwing things at other children in the classroom and on the playground. When Ms. Bath learned of these behaviors, she met with the parents at a Section 504 plan meeting and initiated the assessment process. Prior to that time, and based on Student’s previous assessment, District had no reason to suspect that Student had a disability in the area of behavior and that he may need special education. 30. After District completed its May 2006 assessment, Student’s IEP team met on June 6, 2006, and determined that Student was not eligible for special education. Student’s parents disagreed with that determination. During the initial months of the 2006-2007 school year, Student continued to experience the same problems he had experienced with work completion. Due to Student’s parents’ concerns regarding his progress and their belief that he was eligible for special education as a result of his ADHD, District recommended that an addendum to the previous assessment be conducted because District had assessed Student just four months prior to that time, in May 2006. 31. On October 19, 2006, District prepared an assessment plan to update its May 2006 assessment by conducting a health and development assessment, conducting an assessment of academic/preacademic achievement by reviewing previous assessments and updating as needed, and conducting a classroom observation. 
District agreed to bring in a behavior specialist as part of the assessment to help guide them in increasing Student’s work productivity because it had already been determined that Student was capable of doing the work he was being asked to do. Therefore, the assessment plan specified “Behavior Specialist” under the list of “qualified professional[s] responsible for the administration and interpretation of the assessment,” and the notation next to the words “Behavior Specialist” indicates that “KW”, or Kathy Wandler, would be the behavior specialist, and that she would conduct a classroom observation. The assessment plan did not specify that an assessment report would be completed by the behavior specialist. Parents signed the assessment plan on October 26, 2006, consenting to District conducting the assessment described in the assessment plan. 32. District completed its addendum to the May 2006 assessment on December 6, 2006. The psychoeducational assessment and academic assessment were updated because both had been completed within the previous six months. Ms. Bath and Ms. Weigand each observed Student in the classroom. No assessment instruments were administered by either Ms. Bath or Ms. Weigand. Ms. Wandler, who is a board-certified behavior analyst employed by District for nine years, conducted a behavior analysis as part of the assessment and prepared a BSP. 33. Student contends that District failed to meet its child find obligation with respect to Student in the 2006-2007 school year, and did not assess him in areas of suspected disability, including writing, visual-motor integration, working memory, social/emotional, and behavior. As discussed in Factual Finding 7, a district’s child find obligation toward a specific student is triggered when there is reason to suspect a disability and that special education services may be needed to address that disability. 34. 
The assessment plan signed by parents on October 26, 2006, did not include assessment in the area of motor/perceptual development, and did not specify that an occupational therapist would be among the list of “qualified professional[s] responsible for the administration and interpretation of the assessment.” In addition, the assessment plan did not include assessment in the area of cognitive functioning, which could have included assessment in the area of psychomotor functioning and visual-motor integration. District did not assess Student in the area of visual-motor integration during the 2006-2007 school year. 35. Given that the assessment plan signed by parents did not include assessment in the area of visual-motor integration, the issue for resolution is whether District had a reason to suspect that Student had a disability in the area of visual-motor integration and may have needed special education for that disability during the 2006-2007 school year. 36. As discussed above, District assessed Student in the area of visual-motor integration in May 2006, and Student’s IEP team determined in June 2006 that Student was not eligible for special education. Ms. Wandler developed a BSP for Student as part of the December 2006 assessment, and Student’s parents consented to the implementation of that BSP in February 2007. The BSP, which was developed based on approximately 30 hours of classroom observation of Student by Ms. Wandler, was designed to motivate Student to begin his assignments, and to give him the opportunity to make positive choices. Ms. Wandler worked directly with Mr. Kliewer to set up the plan and to make sure it was working. Ms. Wandler spent approximately 50 hours in the classroom implementing the BSP. Ms. Wandler testified credibly that the BSP was working. Ms. Wandler observed that Student responded to the BSP, and she observed Student write more. Student actually showed off his writing to Ms. Wandler. He began engaging and following directions. 
Ms. Wandler testified credibly that Student was capable of doing his work. 37. Mr. Kliewer, Student’s third grade teacher in the 2006-2007 school year, and a teacher employed by Clovis Unified for 18 years, testified credibly that while it was difficult for him to motivate Student to produce writing, Student is capable of writing. Student would tell Mr. Kliewer that he did not want to write and did not like writing. Some days Student would complete his work, and other days he would not. Also, Student would write sentences in his journal in Mr. Kliewer’s class about Pokemon. Mr. Kliewer established that, while Student did not want to write, he is capable of writing. 38. Consistent with Mr. Kliewer’s testimony, the evidence established that when Student was sent to the office for disciplinary matters and was asked to write a letter of apology to another student he had hurt or an incident report, he was able to do so without any assistance. He used good handwriting, wrote multiple sentences, indented properly, and used adverbs. In the incident reports, Student described what happened during the incidents. Ms. Myrna Powers, the guidance instructional specialist at Maple Creek who is responsible for the discipline program at the school, testified credibly that she saw Student produce these works of writing. Ms. Powers testified credibly that Student loved “campus club,” the school’s after school program. At 3:00 p.m. one day when Student was in her office, Student said it was time to go to campus club. Ms. Powers told Student that he could go when he completed his work. He completed his assignments in 20 minutes that day, and there was a lot of work. Three days later, it happened again. Student completed all of his assignments that day in 35 minutes. Ms. Powers checked the work and it was done accurately. Student had no work to take home that night. Ms. 
Powers holds a clear clinical rehabilitation credential for speech and language pathology and an administrative services credential, and she also holds a master’s degree with an emphasis in communicative disorders. Her testimony that Student could write was credible. 39. Based on the foregoing, District had no reason to suspect that Student had a disability in the area of visual-motor integration during the 2006-2007 school year. This determination is consistent with the findings of Dr. Aholu, who was retained by Student’s parents to assess Student again in September 2007. Dr. Aholu conducted a parent interview and clinical observations as part of this assessment. She administered the BOT, the Beery, and the Sensory Profile, which is a caregiver questionnaire. On the Beery, Student was two months behind age level. His score was in the average range. On the BOT, Student was average or above in all areas except for fine motor integration and upper limb coordination. He was below average in both of those areas. Fine motor integration measures skill in reproducing figures and shapes, which requires the ability to integrate visual stimuli with motor control skills. Upper limb coordination measures visual tracking with coordinated arm and hand movement. Student’s visual-motor integration and visual-motor control skills were in the average range. When Student used paper with lines designed by a teacher, his handwriting was well spaced and well written. She asked him to write fast, and it was still legible and well written. He did not report being in pain while writing. 40. During Dr. Aholu’s September 2007 assessment, as with previous assessments by Dr. Aholu and Ms. Dolin, Student again wanted to do things his own way, and not in the way Dr. Aholu wanted him to do them, and he actually said he wanted to do them his own way. This could affect his scores and skew the scores downward on a standardized test. 
For example, Student scored in the average range on upper limb coordination during the 2005 testing, and in the below average range in the same area in 2007. He refused to complete some tasks that he had completed when she assessed him two years earlier. In Dr. Aholu’s opinion, Student was capable. He was unwilling, not incapable. 41. Based on the foregoing, District had no reason to suspect that Student had a disability in the area of visual-motor integration and that he may need special education for that disability. 43. Ms. Weigand was responsible for the academic assessment portion. Ms. Weigand conducted a classroom observation and spoke with Student’s teacher. She determined that it was not necessary to administer the WJTA again because she had just administered it in May 2006. Ms. Weigand was of the opinion that Student was capable of doing grade-level or above grade-level work, that he was making “great” academic progress in the general education classroom in every academic area, and that he could access the curriculum. 44. As discussed above, in Factual Findings 19 and 34 through 41, Student was capable of writing that year, and his academic performance was at or above grade level. On the spring 2007 administration of the STAR, Student again scored in the proficient range in English/language arts and in the advanced range in math. Student earned mostly A and B grades that year, and received no grade that was below a C. Writing was a part of Student’s language arts grade, and Student was earning good grades in language arts. While Student did not complete most of his writing assignments in class, his failure to complete was not based on an inability to do so. It was his choice. District was working with Student through the use of a BSP to motivate Student to write more, and the BSP was working. Ms. Wandler testified credibly that the BSP could be modified to work even better. 45. 
Based on the foregoing, District had no reason to suspect that Student had a disability in the area of writing and that he may need special education for that disability. 46. The assessment plan signed by parents on October 26, 2006, did not include assessment in the area of cognitive functioning, within which working memory would be included if it were an area of suspected disability. 47. Given that the assessment plan signed by parents did not include assessment in the area of working memory, the issue for resolution is whether District had a reason to suspect that Student had a disability in the area of working memory and needed special education during the 2006-2007 school year. 48. There was no testimony or documentary evidence establishing that working memory was an area of suspected disability in the 2006-2007 school year. Pursuant to Dr. Patterson’s administration of the WJ-III working memory subtests in August 2007, Student was “performing well into the average range” in the area of working memory. Student was performing at the 57th percentile in that area, and his full-scale IQ was 101, according to Dr. Patterson, which was in the average range. 49. Based on the foregoing, District had no reason to suspect that Student had a disability in the area of working memory and that he may be in need of special education for such a disability. 50. The assessment plan signed by parents on October 26, 2006, did not include assessment in the area of personal, social, and emotional development. The assessment plan did, however, include classroom “observation” by a behavior specialist, Ms. Wandler. Although the assessment plan did not require a written report by Ms. Wandler, she prepared a BSP for Student. 51. 
Given that the assessment plan signed by parents did not include assessment in the area of social/emotional functioning or behavior, the issue for resolution is whether District had a reason to suspect that Student had a disability in the area of social/emotional functioning or behavior and needed special education during the 2006-2007 school year. 52. As discussed above, Student had been assessed in the area of social/emotional functioning and behavior in May 2006, and in June 2006 the IEP team determined that he was not eligible for special education. Mother reported to the school nurse on May 25, 2006, that Student was doing much better since he changed classrooms. Student’s aggressive behaviors that had been a problem in the spring of 2006 were not present after he moved from Ms. Kuipers’ to Ms. Sutton’s class in April 2006, and Mr. Kliewer established that Student’s aggressive behaviors were not present during the beginning of the 2006-2007 school year. 53. Prior to commencing her classroom observations of Student, Ms. Wandler interviewed Ms. Bath and Student’s teacher, who conveyed that Student was not doing work in class, and that if he started doing a task, he did not maintain the work momentum and complete the task. Ms. Wandler conducted approximately 30 hours of observation of Student in class from October through December 5, 2006. She obtained data and identified target behaviors, and those became the baseline. Ms. Wandler prepared a BSP that paired rewards, in the form of reinforcements and social praise, and punishments because, according to Ms. Wandler, research shows clearly that pairing punishment with a reinforcer is much more effective in children with ADHD than using one of the two alone. 54. Ms. Wandler determined that Student, by employing his behaviors, was escaping task demands, obtaining or accessing contingent school privileges, and avoiding task demands. Student would refuse to do work in class and say that he would do the work at home. 
He was not only able to escape doing the task, but he was also obtaining adult attention. Student was not required to put forth effort to get privileges such as recess and field trips, and there was no reason for him to change his behavior if he did not have consequences. Student’s noncompliance was actually being reinforced at school. However, Ms. Wandler determined that Student had skills of alternative behavior in his behavior repertoire. Student was upset when he lost an accountability check earlier in the year, and asked if he could earn it back. The purpose of the BSP is to change consequences so that Student learns to function in life. 55. Ms. Wandler prepared a BSP and presented it at the December 6, 2006, IEP meeting. According to the IEP notes, Student was having the most difficulty in the area of writing, and especially with lengthy assignments, although the focus of discussion at the meeting was on behavior. The IEP team determined that Student was ineligible for special education. District convened a 504 meeting after the IEP meeting, and the Section 504 team determined that Student’s problems with work completion and completing written assignments were due to noncompliance with a nonpreferred activity, and that the BSP proposed by Ms. Wandler would target those behaviors. Parents did not consent to the implementation of the BSP in December 2006 because they still believed that the BSP would punish Student for his ADHD, and that Student was eligible for special education as OHI or SLD. While Student’s parents believed his behavioral problems resulted from ADHD, Ms. Wandler’s opinion, which was credible, was that they did not. Ms. Wandler conducted extensive observation of Student in his classroom, and she observed that Student did not engage in some assignments from the beginning, while many children with ADHD will engage and then become distracted. 56. In January 2007, without a BSP in place, Student began to exhibit behaviors again. 
District convened a Section 504 team meeting on February 16, 2007, and Student’s parents consented to the implementation of the BSP. Ms. Wandler worked with Mr. Kliewer and Student in the classroom for several weeks to implement the BSP. 57. On March 30, 2007, the 504 team reviewed the new BSP and determined that it was working and was increasing Student’s rate of work completion, although Student was still struggling with work completion and following directions, was off task at times, and appeared not to be engaged. In addition, there were still inconsistencies in his day-to-day performance. Ms. Wandler continued to work extensively with Mr. Kliewer until April, when Student’s alternative behaviors were in place and Student was complying more frequently. At that point, Ms. Wandler “faded out.” According to Ms. Wandler, the BSP was working. 58. Ms. Wandler’s testimony was credible. Ms. Wandler holds a bachelor of science in child development, a master of arts in education and psychology (combined), a master of science in special education with an emphasis in autism, a special education teaching credential, and an applied behavior analysis certificate. Ms. Wandler has years of experience as a behaviorist and working with children with disabilities. She is currently an adjunct faculty member at California State University, Fresno, and teaches a class she designed for general education teachers covering classroom management, behavior management, and how to teach students with special needs in an inclusive classroom setting. In addition, Ms. Wandler consults with other districts and with families as part of her private practice. At District, Ms. Wandler assists teachers in implementing behavioral or instructional strategies. She is called in when a student is displaying classroom behaviors that are impeding his or her own learning or that of others. She identifies triggers and produces research-based interventions. 
It is part of her job to develop BSPs for children with ADHD. She researches appropriate and successful interventions for students with that diagnosis. 59. In addition to Ms. Wandler’s classroom behavior observations, Ms. Bath prepared a December 2006 addendum to her June 2006 report, and Ms. Weigand prepared an update to her May 2006 assessment. Both conducted classroom observations of Student. Ms. Bath noted in the update of her psychoeducational evaluation that a BSP had been developed to address Student’s failure to begin and complete assignments in the classroom. 60. Overall, during the 2006-2007 school year, Student’s behavior improved, and he was disciplined only eight times that year, as compared to 24 times the previous year. Student had one good friend in the class, and his social behavior was “okay” in the class. However, in May 2007, the Section 504 team met again because Student was having behavior difficulties, such as tripping or poking others. 61. In May 2007, however, after District had conducted extensive observations of Student, prepared a BSP, and invested dozens of hours of Ms. Wandler’s time implementing the BSP, Student’s work completion improved only modestly. Just after Ms. Wandler faded out in late April 2007, Student’s aggressive behaviors toward other children returned. Ms. Wandler testified credibly that Student behaved better when she was in the classroom. He liked the attention he was getting from her. After Ms. Wandler faded out and Student’s aggressive behaviors returned, District had reason to suspect that Student had a disability in the area of social/emotional functioning and behavior. However, District did not have reason to suspect that he may be in need of special education because, as Ms. Bath credibly testified, Student was achieving at or above grade level and earning good grades, he was progressing academically, he was learning, and he was benefiting from his education, despite his problems with behavior. 
Her opinion was based on her assessment of him, Ms. Weigand’s academic assessment of him, his grades, reports from teachers, and his scores on the STAR. The only indication that Student’s educational performance was suffering was that he would not perform many of his writing tasks in class and his behaviors resulted in him being sent out of the class at times. Ms. Bath testified that Student did not need special education, such as the frequent repetition that would be provided in special education, in order to learn. Based on the testing and information provided by teachers, when Student learned something, he retained it. Student’s expert, Dr. Patterson, testified that Student had oppositional traits and wrote in his report that Student had “apparent oppositional defiant disorder.” Dr. Patterson conceded that it is fair to say that Student was learning in the general education environment. In light of all of District’s previous assessments, in January 2005 and in May and June 2006, and because District had no reason to suspect that Student required special education, it was not required to assess Student in the areas of social/emotional functioning and behavior. Did District fail to assess Student in areas of suspected disability, including visual-motor integration, writing, working memory, social/emotional functioning, and behavior, as part of its May/June 2006 assessment? 63. District referred Student for assessment in April 2006. The assessment plan, which was signed by Mother on April 27, 2006, established a plan to assess Student in the areas of health and development; motor/perceptual development; cognitive functioning, academic/preacademic achievement; and personal, social, and emotional development. The assessment plan also provided for a functional behavioral assessment, including observations and report, and classroom observations. 64. 
District’s May 2006 assessment of Student included a psychoeducational evaluation completed by school psychologist Lisa Bath on May 31, 2006; an occupational therapy assessment completed by District occupational therapist Erin Dolin, an academic assessment completed by District resource specialist Terri Weigand, and a health assessment completed by District school nurse Sue Holmen. 65. Student contends that District failed to assess Student in areas of suspected disability, including visual-motor integration, writing, working memory, social/emotional functioning, and behavior, as part of this assessment. 66. During the middle of the 2005-2006 school year, Student’s difficulties with work completion worsened. Although Student had completed his work in class in the beginning of the school year, he completed less and less work in class over time. By mid-school year, Student was refusing to do work in class on some days. In April 2006, after Student’s difficulties with work completion escalated, District prepared an assessment plan, which Student’s parents signed on April 27, 2006. District completed its assessment in June 2006. Student contends that Ms. Bath failed to administer psychomotor testing instruments to Student as part of her assessment and, therefore, Student was not assessed in the areas of visual-motor integration. 67. Ms. Bath conducted a psychoeducational evaluation. However, she did not administer testing instruments to assess Student in the area of visual-motor integration because Student had been assessed in that area by several other assessors, and Ms. Bath had reviewed those assessment reports and reported their results in her assessment. For example, Ms. Anderson had administered the Developmental Test of Visual-Motor Integration (VMI) to Student on January 7, 2005. Student scored in the sixth percentile on that test. Dr. 
Glidden had administered the Beery and the Bender in November 2005, and Student scored in the average range, as discussed above. 68. In addition, District occupational therapist Erin Dolin conducted an occupational therapy assessment of Student on May 9 and June 5, 2006, pursuant to the assessment plan signed by Student’s parents on April 27, 2006. Ms. Dolin’s occupational therapy assessment was comprehensive and included the administration of several instruments to assess Student’s visual-motor integration, including the Test of Visual Motor Skills (TVMS-R), the Print Tool, the Bruininks-Oseretsky Test of Motor Proficiency (BOT), and the Sensory Profile. In addition, Ms. Dolin’s assessment included a records review, including a review of the January 2005 psychoeducational evaluation by Ms. Anderson, the November 2005 neuropsychological evaluation by Dr. Glidden, and the August 2005 occupational therapy assessment by occupational therapist Dr. Dawn Aholu (privately retained by Student’s parents); an interview with Ms. Bath; an interview with Student’s two second grade teachers, including Ms. Kuipers and Ms. Sutton; an interview with Father; an observation of Student in the classroom; and a clinical observation of mechanical and neuromuscular functions. 69. In conducting her assessment of Student, Ms. Dolin was aware that Dr. Glidden had diagnosed Student with developmental coordination disorder and ADHD. She was aware of Student’s difficulties with writing and completing homework, and his history of behavioral difficulties. She was also aware that Student received a standard score of 79 on the VMI, which was administered by Ms. Anderson as part of District’s January 2005 assessment, and that score indicated that Student’s visual motor and fine motor skills were delayed and could present problems for him with writing. 70. In addition, at the time of her assessment, Ms. Dolin was aware of the findings of Dr. 
Dawn Aholu, an occupational therapist with three years of experience at Children’s Hospital of Central California, who evaluated Student on August 24, 2005, based on a referral from Student’s physician. Dr. Aholu administered the BOT, the Beery Developmental Test of Visual-Motor Integration (Beery), and the Sensory Profile. In addition, Dr. Aholu conducted a parent interview and clinical observations. Ms. Dolin was aware that on Dr. Aholu’s administration of the BOT, Student performed close to age level. His scores were approximately one year and two months delayed in visual-motor control and upper limb speed and dexterity. On the Beery, Student scored at the six year, 11 month age level. Student’s scores in all areas tested by Dr. Aholu in August 2005 were in the average range. 71. Upon Ms. Dolin’s administration of the TVMS-R, which is a standardized, age-normed test for children that assesses visual-motor integration, Student scored within the average range. On her administration of the BOT, which is an age-normed assessment that measures motor performance in gross motor, fine motor, bilateral and upper limb coordination, memory, orientation, placement, size, sequence, control, and spacing, Student scored in the average range. Ms. Dolin also administered the Print Tool, which is a printing assessment that evaluates a student’s skills in the area of producing capital letters, lower case letters, and numbers. Although this assessment is not standardized and normed, it is a useful tool for evaluating writing. Student’s overall score on this test was 75 percent accuracy, and the suggested overall score of a child over eight years of age is 95 percent. Student’s score showed some inconsistency in sizing of letters and difficulty in control. Although Student’s score of 75 was low, it was affected by his motivation to participate in testing. For example, on the first day of assessment, Student wanted his capital letters big, and made them that way. 
On the second day, his ability level was different, and he did neat and appropriate work on the Print Tool. The letters he wrote were the appropriate size, were refined, and Student had more control. He demonstrated better quality on his attempt on the second day of assessment, showing the ability to write with the size equivalency of a second- to fourth-grade student even though he wrote somewhat quickly. Student told Ms. Dolin that he did not like to write. 73. Ms. Dolin was a credible witness who was both knowledgeable and experienced in the field of occupational therapy. She has been an occupational therapist for eight years, and has been employed by District since 2003. She holds a bachelor’s degree in exercise physiology and a master’s degree in occupational therapy. She works on a per diem basis for Valley Children’s Hospital, and also teaches several classes in the physical therapy department at California State University, Fresno, and provides in-service training for teachers in District. She is also employed as an occupational therapist by the Fresno County Office of Education. Her conclusions regarding Student were consistent with those of the other assessors and were credible. 75. District assessed Student in the area of writing in May and June 2006. Student contends that District based its conclusions only on the Woodcock Johnson III Tests of Achievement (WJTA) written language subtests. Dr. Robert Patterson, Student’s expert, who assessed Student on August 24, 2007, testified that the WJTA is not a “broad field writing sample but is a very limited writing sample and provides only the concept that [Student] is able to write short, brief sentences. . ..” According to Dr. Patterson, “he is unable to write a story. . ..” In addition, Student contended, based on Dr. Patterson’s testimony, that Student could not sustain writing effort, and the WJTA did not assess Student’s ability in this area. 
Therefore, Student contends, he was not assessed in the area of writing. 76. In the middle of the 2005-2006 school year, Student’s difficulties with work completion worsened, and his willingness to write declined. Although Student completed his work in class in the beginning of the school year, he completed less and less over time. In the spring of 2006, Student was refusing to do work in class on some days. The assessment plan developed by District in April 2006 proposed to assess Student in the area of academic functioning, which included writing. 77. Ms. Weigand, who has been a resource specialist for 20 years, conducted an academic assessment on May 16, 2006, as part of District’s assessment of Student. Ms. Weigand holds a bachelor of science in child development, a learning handicapped credential, and a resource specialist credential. She has assessed 200 to 300 students. She has taught and tested many children with ADHD, and is very familiar with the symptoms of that disorder. She administered the WJTA, some subtests of the Brigance Inventory of Basic Skills, and the Silvaroli Reading Inventory, and also conducted a classroom observation of Student and interviewed his teachers. Student received a standard score of 104 on broad written language pursuant to her administration of the WJTA, which was at the beginning third-grade level. Student was then in second grade. 78. Ms. Weigand testified credibly that although an assessor is not required to obtain a separate writing sample pursuant to an administration of the WJTA, she wanted to see Student’s writing because she was uncertain as to whether Student was unable to produce written work, or if he was simply choosing not to produce written work. Ms. Weigand attempted to get a writing sample from Student during her assessment of him, but she was unable to get him to produce one. She asked Ms. Bath to attempt to obtain one. During Ms. 
Bath’s assessment of Student, she was able to get Student to produce a writing sample by asking him to write about something he liked, Pokemon, and by offering Student a reward if he wrote three paragraphs. Student independently produced a three-paragraph, nine-sentence writing sample describing his favorite Pokemon character. All but one of the nine sentences began with a capital letter and had a period at the end. Student used proper punctuation and capital letters, the words he wrote were separated, and he wrote on the lines. Student grouped ideas together, stayed on topic, wrote in complete sentences, used words in the correct order, and he placed nouns, verbs, objects and adjectives in correct places. In addition, the writing in the sample was legible. The writing was within the lines and there was good spacing between words. Student correctly spelled the words “because,” “water,” and “type,” which are irregular sight words, that is, words that cannot be sounded out. Student was able to spell short- and long-vowel words. In addition, Student used a conjunction to connect ideas in the writing sample, which is a skill that teachers begin concentrating on in third and fourth grades. Ms. Weigand and Ms. Kuipers, Student’s second grade teacher, established that the writing sample Student produced was at grade level. Before getting the writing sample, Ms. Bath had concerns about Student’s ability to write, but she had no concerns about his ability to write after he wrote the sample because he wrote it without any help. Ms. Weigand testified credibly that Student has the ability to write. 80. Ms. Weigand, Ms. Dolin, and Dr. Aholu all had difficulty getting Student to write. Ms. Dolin and Dr. Aholu each testified credibly that Student wanted to disregard their instructions for completing the testing instruments, and “do it his way.” Consistent with Ms. Weigand’s, Ms. Dolin’s, and Dr. Aholu’s experiences with Student, Mother reported to school nurse Ms. 
Holmen during her interview on May 25, 2006, that Student did not like to write, that he was refusing to do his work, and that this had changed in April 2006, when he switched classrooms from Ms. Kuipers’ to Ms. Sutton’s class. 81. Student contends, based on Dr. Patterson’s testimony, that Student is unable to write a story and that he cannot sustain writing effort. While Dr. Patterson is qualified to render an opinion regarding assessment in the area of written language, cross-examination of Dr. Patterson revealed that he was not aware that District had obtained a writing sample from Student in May 2006, described above in Factual Finding 78, or that this writing sample was part of District’s assessment of Student in the area of writing. In addition, Dr. Patterson had not spoken with any of Student’s teachers, or with anyone who assessed Student previously, and Dr. Patterson had not observed Student in the classroom. Thus, Dr. Patterson had limited information on which to base his conclusions, and, therefore, Student’s contention based on Dr. Patterson’s testimony that District did not assess Student in the area of writing is not persuasive. 82. Based on the foregoing, District assessed Student in the area of writing. 83. District referred Student for assessment in April 2006, and the assessment plan included assessment in the area of cognitive functioning. Student contends that District’s May/June 2006 assessment of Student did not include cognitive testing, which would include the area of working memory. 84. Ms. Bath reported in her psychoeducational report that Student was functioning within the average to high average range of overall cognitive abilities and that he had relative weaknesses in several areas, including working memory, but she did not administer testing instruments to assess Student in that area. Rather, she utilized the cognitive testing completed by Dr. Glidden in November 2005 and the cognitive testing completed by Ms. Anderson in January 2005. 
Scores obtained by both Ms. Anderson and Dr. Glidden were in the low average range. 85. According to Ms. Bath, it would have been inappropriate to have conducted cognitive testing of Student again in May 2006 because Ms. Anderson administered the WISC-IV to Student in January 2005 and Dr. Glidden administered the WISC-IV in November 2005. Ms. Bath testified credibly that it is generally considered acceptable for a school psychologist, as part of an assessment, to use an evaluation done previously if the evaluation was done within the previous year, and that, typically, psychologists avoid using the same cognitive measure more than once within a year in order to avoid practice effect. The cognitive evaluation completed by Dr. Glidden was completed just six months prior to Ms. Bath’s assessment and showed strengths and weaknesses similar to those identified by Ms. Anderson in her February 2005 report. Although the scores Student achieved in Dr. Glidden’s administration of tests were higher than those he achieved in Ms. Anderson’s administration, perhaps due to the “practice effect,” the patterns of strengths and weaknesses were the same. 87. Based on the foregoing, District assessed Student in the area of working memory. 88. Student contends that when District assessed Student in May and June 2006, Ms. Bath did not interview Mother or Father in completing her psychoeducational assessment. In addition, Student contends that Ms. Bath’s failure to contact Student’s physician or psychologist for information, and her failure to include in her report information about what medication Student was taking to treat his ADHD and the effects of that medication, establish that District did not assess Student in the area of social/emotional functioning. 89. Ms. 
Bath testified credibly that in March 2006, based on Student’s aggressive behaviors toward other students and his escalating problems with work completion, she had reached the conclusion that Student might qualify for services under the category of emotional disturbance (ED). Based on her experience, Ms. Bath understood and was sensitive to the fact that it was difficult for a parent to hear that his or her child might qualify for special education under the category of ED. Ms. Bath called Mother in March 2006, and had that difficult discussion. Mother testified credibly that she was upset by the conversation and informed Ms. Bath that Student was not emotionally disturbed. 90. The assessment plan prepared for Student in April 2006 included assessment in the area of social/emotional functioning. Although Ms. Bath did not interview Mother or Father in May 2006 prior to completing her assessment, Ms. Bath administered the BASC in May 2006 to assess Student’s social and emotional needs. Student’s two teachers, Mother, and Student all completed BASC rating forms as part of the assessment. Student’s teachers rated him in the at-risk range in several areas, including hyperactivity and attention problems, learning problems, social skills, leadership, functional communication, and study skills. Student’s teachers rated him in the clinically significant range in atypicality, withdrawal, and adaptability. According to the BASC rating scales completed by Mother, however, Student was in the average range on overall behaviors that comprise the internalizing and externalizing problems composite, as well as in the average range in overall adaptive behaviors. Student’s parents rated him in the at-risk range only for hyperactivity, attention problems, and activities of daily living. Mother rated Student in the average range with regard to specific behaviors in all other areas. Ms. Bath included Mother’s BASC rating in her report. 91. Although Ms. 
Bath did not interview Mother or Father after the assessment plan was signed, and therefore did not interview them specifically in conjunction with the assessment plan, the evidence establishes that Ms. Bath knew Student very well at the time she conducted her May 2006 assessment. She had been responsible for implementing Student’s Section 504 plan since February 2005. She spoke with his teachers regularly, and was aware of his work completion difficulties and aggressive behaviors toward other students in the classroom. In addition, in the year and a half preceding Ms. Bath’s assessment of Student, Ms. Bath had many conversations with Student’s parents regarding Student. She had participated with Student’s parents in several Section 504 meetings since February 2005. The last Section 504 meetings Ms. Bath participated in with the parents prior to the May 2006 assessment were in February and April 2006. The testimony of Mother, the testimony of Ms. Bath, and the notes from those Section 504 meetings establish that Student’s parents informed Ms. Bath on an ongoing basis regarding Student’s difficulties in the area of social/emotional functioning and regarding the medications Student was prescribed for his ADHD. 92. In addition, Ms. Holmen, the school nurse, interviewed Mother by telephone on May 25, 2006, as part of District’s assessment of Student. Ms. Holmen’s interview with Mother included discussion regarding Student’s social functioning. The report also discussed Ms. Holmen’s physical observations regarding Student and medication taken by Student. The nurse’s report includes a three-paragraph summary of her interview with Mother, which indicates that Mother reported that Student “cycles with respect toward others” and “has no trouble challenging authority.” Mother reported that Student had more social interaction this year than in the prior year. 
Mother believed that Student’s change of classroom and teachers had made a great difference, and that Student was not going to the office as much and was happier at school. Mother stated that Student was very smart with no academic difficulties, did not like to write, and struggled with anything to do with writing. She stated that he was refusing to do work, but that had changed, she believed, since he had changed classrooms. 93. It was clear from Mother’s testimony and her statements to Ms. Bath and Ms. Holmen that Mother and Father did not want Student to be determined eligible for special education under the category of ED, and began to report inaccurate information to Ms. Bath. For example, Mother testified credibly that she told Ms. Bath during their March 2006 conversation that Student did not exhibit his behaviors across all environments, and that he was happy at home. However, Mother’s statement to Ms. Bath was inconsistent with her previous reports that Student tantrumed and threw himself on the floor at home, that he had various difficulties dating back to his birth, and that Student’s difficulties with homework completion were a continuing “nightmare” for parents. It can be inferred from the evidence that Mother intentionally gave inaccurate information to Ms. Holmen and intentionally gave Student inaccurate ratings on the BASC in order to ensure that Student would not be found eligible under the category of ED. 94. In light of the circumstances, Ms. Bath’s reliance on her previous conversations with Student’s parents, her interview with the school nurse who had interviewed Student’s parents for purposes of the assessment, and her administration of the BASC to Mother, Student, and Student’s teachers, establish that District assessed Student in the area of social/emotional functioning. Ms. Bath’s failure to interview Mother after the assessment plan was signed does not change that fact, and Student offered no evidence that it did. 95. 
In addition, Student’s records, all of which Ms. Bath had reviewed, including Dr. Glidden’s report, Ms. Holmen’s health assessments, and SST meeting notes, contain discussion regarding the medication Student was taking for his ADHD. Ms. Bath also interviewed Ms. Holmen regarding Ms. Holmen’s May 25, 2006, interview with Mother. Ms. Bath was aware that Student was prescribed medication for his ADHD, and that these medications may have side effects that affect behavior or functioning. The fact that Ms. Bath did not discuss in her report the side effects of the medication Student was taking and how that medication might affect him academically does not establish that District failed to assess Student in the area of social/emotional functioning. These areas were outside the expertise of a school psychologist. 96. While Ms. Bath did not contact Dr. Glidden, she read his report and his recommendations. In addition, over the past year and a half, she had spoken with and participated in Section 504 meetings with Student’s parents and his teachers regarding Student’s social/emotional functioning. She had reviewed all reports relating to Student. The fact that Ms. Bath did not contact Dr. Glidden or any other doctor treating Student does not establish that District failed to assess Student in the area of social/emotional functioning. 97. Based on the foregoing, District assessed Student in the area of social/emotional functioning. 98. Student contends that when District assessed Student in May and June 2006, it did not assess him in the area of behavior. 99. In April 2006, District prepared an assessment plan that included assessment in the area of behavior and also a functional behavioral analysis. Pursuant to that plan, District assessed Student in May 2006. The evidence establishes that District assessed Student in the area of social/emotional functioning, as discussed above in Factual Findings 88 through 97. In addition, as part of her May 2006 assessment, Ms.
Bath, who is also a certified behaviorist, conducted a functional behavior analysis of Student. Ms. Bath determined by talking with Student’s teachers that Student’s aggressive behaviors in the classroom, described above, and his problems with work completion were interfering with learning. Student had been in two second grade classes by May 2006. He was in Ms. Kuipers’s class until his Section 504 team determined in April 2006 that he needed a change in classroom placement to break his negative behavior cycle. At the time Ms. Bath commenced her functional behavioral analysis, Student had been in Ms. Sutton’s class for approximately two weeks. After Student moved to Ms. Sutton’s classroom, he no longer exhibited aggressive behaviors in the classroom, but his problem with work completion remained. Therefore, the target behaviors Ms. Bath identified in her functional assessment of behavior problems included Student’s failure to follow whole class directions, failure to follow individual directions, and off-task behavior. 100. Ms. Bath conducted classroom observations of Student in Ms. Sutton’s class, and she observed that Student was off task approximately 25 percent of the time, and that during independent seat work time he was off task approximately 55 percent of the time. While sitting on the floor and listening to a story or in the computer lab, he was off task none of the time. During story time, Student participated in the discussion. Student failed to follow whole class instructions 41 percent of the time and individual directions 42 percent of the time. According to Ms. Bath’s observations, Student was off-task 100 percent of the time when he was supposed to be doing writing tasks in class. Ms. Bath determined that the antecedents to Student’s off-task behaviors included transitions, being asked to stop a preferred activity, being asked to do any kind of writing, and being required to work along with the class under time constraints. 
If the teacher ignored him, he was more likely to comply. Telling him to comply caused more off-task behavior. Thus, Ms. Bath hypothesized that Student may be trying to avoid doing undesirable tasks, trying to avoid being under specific time constraints to produce a product, or trying to continue a preferred activity or refrain from having to make a transition. Student was completing work, but not in class. If the work was something he did not want to do, he would not do it, and he was getting his work done at home. 101. Ms. Bath determined that these behaviors were interfering with the learning process because they disrupted the flow of class instruction, Student was sometimes removed from class for disciplinary purposes, and he was losing learning practice when he was not doing his work. 102. Ms. Bath did not always know why Student was not completing his work in class. Student would sometimes say that he did not want to do his work, or that it was “too hard.” Ms. Bath had a concern that Student’s unwillingness to write was “partly” something other than noncompliance, and that is why District initiated an evaluation. During the February 2006 and April 2006 Section 504 team meetings, Ms. Bath hypothesized that Student was not completing work because he did not want to. After evaluating him in June 2006, it was Ms. Bath’s opinion that Student was not completing work, in any area of academics, because it was something he did not want to do. His writing sample, described above, showed that he was capable of completing written work, and Ms. Bath also saw some work Student completed in the classroom. 104. Based on the above, District assessed Student in the area of behavior, including aggressive behaviors and failure to complete class work.
Did District deny Student a free appropriate public education by failing to find him eligible for special education and related services under the category of specific learning disability (SLD) or other health impairment (OHI) from June 20, 2005 through June 20, 2007? 107. The severe discrepancy method of determining SLD looks at whether a severe discrepancy exists between the child’s intellectual ability and his or her academic achievement. There are three factors to consider in determining whether a child has an SLD under this method: 1) Does a child have a disorder in one of the basic psychological processes, which include attention, visual processing, auditory processing, sensory-motor skills, cognitive abilities including association, conceptualization and expression; 2) Does a severe discrepancy exist (based on either a comparison of standardized tests or on other factors including observations); 3) Can the discrepancy be corrected through other regular or categorical services offered within the regular instructional program. 108. If standardized tests do not reveal a severe discrepancy between intellectual ability and achievement, the IEP team may still find that a severe discrepancy exists as a result of a disorder in a basic psychological process based on: 1) data obtained from standardized assessment instruments; 2) information provided by the parent; 3) information provided by the pupil’s present teacher; 4) evidence of the pupil’s performance in the regular and/or special education classroom obtained from observations, work samples, and group test scores; 5) consideration of the pupil’s age, particularly for young children; and 6) any additional relevant information. 109. 
If the Student has a disorder in one of the basic psychological processes, has a severe discrepancy between ability and achievement, and the discrepancy cannot be corrected through other regular or categorical services offered within the regular instructional program, a determination must then be made regarding whether, as a result of that SLD, the child needs special education. If the child does not need those services to make progress academically, he or she is not eligible for special education. Does Student Have a Disorder in One of the Basic Psychological Processes? 110. Dr. Patterson testified credibly that Student has a processing disorder in the area of attention. According to Dr. Patterson, Student’s attention waxes and wanes across time, and Student self-distracts. Student cannot maintain and sustain attention for a lengthy period of time. Many of Student’s off-task behaviors are due to attention span difficulties and resulting frustration. Dr. Clare testified credibly, and consistently with Dr. Patterson, that Student has attention difficulties and impulsivity that result in deficits in attention. District concedes Student’s diagnosis of ADHD, and did not contend that Student’s attention difficulties were insufficient to constitute a processing disorder. 111. Dr. Patterson also testified that Student has processing speed disorders based on his low scores on three subtests of the WJ-III, including rapid picture naming (scored in the first percentile), visual matching (scored in the fourth percentile), and decision speed (scored in the third percentile). Based on these scores, Dr. Patterson testified that Student has difficulty performing academic tasks because of difficulties with motor speed and accuracy, and that Student shuts down at times when he is not able to perform. Student performs well on a task if it is simple and straightforward, but as tasks require more time and more steps, Student is unable to complete them, he gets frustrated, and he shuts down.
As Student gets older, his ability to process is not keeping up with that of his peers. While Student can write, Dr. Patterson concedes, he cannot write consistently over a lengthy period of time. Dr. Patterson testified that while he is able to write, Student “poops out” and stops writing and “implodes” because he does not like to write. He then becomes defiant. As a result, it is difficult for Student to initiate the task of writing, and he tries to avoid writing or tries other strategies to get out of writing. 112. However, Dr. Patterson testified that Student’s processing speed increased when he was on his medication for his ADHD. According to Dr. Patterson’s report, Student showed “a fairly large increase in his work simply due to the medication effect.” According to Dr. Patterson’s report, “there was a significant increase in the actual motor speed on medication, indicating better processing, more attentivity.” Student took his medication, Concerta, immediately before Dr. Patterson administered the WJ-III. Dr. Patterson testified that he administered the WJ-III subtests in the order they are listed in his report, and administered the entire WJ-III within 1.5 hours. Out of the 19 subtests of the WJ-III, the decision speed subtest was administered sixth, the rapid picture naming subtest was administered eighth, and visual matching was administered 14th. Dr. Susan Clare testified credibly that it takes an hour for Concerta to be metabolized in the brain so that it can do its work. Dr. Patterson offered no testimony to the contrary. Thus, the evidence shows that Student obtained these low scores on rapid picture naming, visual matching, and probably decision speed before his medication took effect. 113. In addition, District expert Dr. Susan Clare testified credibly that Student received average scores in the area of processing speed based on Ms. Anderson’s and Dr. Glidden’s administration of the WISC-IV, which also tests processing speed.
The WISC-IV compartmentalizes processing speed under coding and symbol search. Student received a standard score of 100, which is the 50th percentile, on the WISC-IV in the area of processing speed in February 2005 when assessed by Ms. Anderson, and a standard score of 100, also the 50th percentile, in this area in November 2005 when assessed by Dr. Glidden. Student’s scores were in the average range for processing speed on these two administrations of the WISC-IV. Although it is unusual for a psychologist to administer the WISC-IV within nine months of a previous administration because of the practice effect and the potential of skewing results and increasing error, Dr. Glidden administered that instrument just nine months after Ms. Anderson did. However, the processing speed portion of the test is less subject to an increased score as a result of the practice effect. Therefore, Dr. Patterson’s testimony regarding Student’s processing speed deficits was not persuasive. 114. Therefore, while the facts do not support a finding that Student has processing speed disorders, the evidence is clear that Student has a processing disorder in attention, which is one of the basic psychological processes for purposes of determining SLD eligibility. Is there a Severe Discrepancy Between Student’s Intellectual Ability and Achievement? 115. A severe discrepancy between intellectual ability and academic achievement may be demonstrated by a comparison of “a systematic assessment of intellectual functioning” and “standardized achievement tests.” A severe discrepancy is greater than 1.5 multiplied by the standard deviation of the computed differences between the two types of tests. Student contends, based on Dr. Patterson’s testimony, that Student has a severe discrepancy between ability and achievement in the area of written expression. 117.
As stated in Factual Finding 77, District staff administered the WJTA in May 2006, and Student scored in the average range in broad written language, with a standard score of 104. Compared to Student’s IQ of 101, this did not indicate a severe discrepancy. In addition, Student’s writing sample produced for the May 2006 assessment was on grade level and met ending second grade standards. The incident reports and letters Student wrote while in the office were on grade level. Student’s second and third grade teachers testified credibly that he could write at grade level. During both school years at issue, Student was performing at grade level in written language. 118. In addition, Dr. Clare has administered the PIAT-R, and is familiar with that test. Dr. Clare testified credibly that Student’s scores on the PIAT-R show that he can learn reading, writing, and arithmetic. While Student’s standard score of 81 in written expression on the PIAT-R indicates that written expression is a relative weakness for Student, his score of 81 is within the average range. Dr. Clare did not see a processing disorder that would account for writing difficulty in Student. Based on her review of Dr. Patterson’s report, it appears to Dr. Clare that Student is capable of performing better in writing. 119. Moreover, Dr. Clare has also administered the WJ-III and the WJTA writing samples and fluency tests. Dr. Clare testified credibly that the WJ-III and the WJTA have excellent reliability in testing written language and in general. These two instruments have comparable measures, and would be expected to yield more accurate results than if the tester administered a Woodcock Johnson test and another measure, such as the PIAT-R. 120. In evaluating the weight of the testimony on both sides, District’s testimony is more persuasive. In addition to the reasons stated above, Dr. Patterson’s testimony is entitled to less weight because Dr.
Patterson had not spoken with anyone from District regarding Student. Dr. Patterson conceded that it is best practice to talk with teachers and to observe the student in a classroom setting in conducting an assessment. Dr. Clare testified credibly that in making a determination as to whether a student requires special education, it is important to get information from the student’s teachers regarding the Student’s performance in comparison to the performance of other students, to get portfolio information from the student’s teachers, and to get reports from the teacher regarding the student’s behavior, academics, and social functioning. A classroom observation is required by the state as part of a psychoeducational battery. In Dr. Clare’s 15 years with District, she never conducted a psychoeducational evaluation without interviewing the student’s teacher. Dr. Clare also conducted personal observations as part of her psychoeducational assessments because she wanted to see the student herself, rather than just receive a report regarding the student’s classroom functioning. Likewise, Ms. Bath testified credibly that she never does an assessment without observing the child in his or her educational placement and interviewing the student’s teacher in order to assess a student’s performance and how he or she is using his or her skills. 121. In addition, Dr. Patterson was not aware that District obtained a writing sample from Student in May 2006 as part of its assessment. When shown the writing sample obtained from Student by District, Dr. Patterson testified that it contained “discrete” sentences and multiple errors. However, Student’s second grade teacher, Ms. Kuipers, and Student’s resource specialist, Ms. Weigand, both testified credibly that Student’s writing sample from May 2006 was on grade level and met ending second grade standards. 122. Based on the foregoing, District’s evidence was more persuasive.
According to District, Student’s intellectual ability as shown by a full-scale IQ score of 101, and his academic achievement in broad written language as shown by a score of 104 on the WJTA, were both in the average range. Therefore, Student does not exhibit a severe discrepancy between intellectual ability and achievement and, therefore, he is not eligible for special education under the category of SLD. Can the discrepancy be corrected through other regular or categorical services offered within the regular instructional program? 123. Even if Student had a severe discrepancy between ability and achievement, he would be eligible for special education under the category of SLD only if the discrepancy could not be corrected through other regular or categorical services offered within the regular instructional program. 124. Ms. Bath concluded, as a result of her June 2006 assessment of Student, that he was not eligible for special education and related services under the criteria for SLD because he was benefiting from the general education program and making progress without special education services. When Student did work, it was good. In May 2006, Student was earning all A grades and was promoted to the fourth grade at the end of the school year. On the STAR test administered in the spring of 2006, Student, without modifications, scored in the proficient range in English/language arts and in the advanced range in mathematics. In the third grade, Student again earned all A and B grades, except for one C grade. On the spring 2007 administration of the STAR, Student again, without modifications, scored in the proficient range in English/language arts and in the advanced range in math. 125. Student was benefiting from his education and was not eligible for special education, despite his problems completing work. Ms. Bath testified that Student does not need frequent repetition, as would be provided in special education, in order to learn. 
Based on the testing and information provided by teachers, when Student learns something, he retains it. Student’s expert, Dr. Patterson, conceded that it is fair to say that Student is learning in the general education environment. 126. Based on the foregoing, Student was learning in the general education program and did not require special education. 127. Student also contends that he is eligible for special education under the category of other health impairment (OHI). In order for a student to be eligible for special education under the category of OHI, a student must have “limited alertness, including a heightened alertness to environmental stimuli” that is due to a chronic condition such as attention deficit disorder or ADHD and that adversely affects the student’s educational performance. Even if a chronic condition such as ADHD adversely affects a student’s educational performance, the student must still be found to require special education and related services. Does Student have limited alertness from a chronic condition based on his ADHD? Does Student’s ADHD adversely affect his educational performance? 129. The next issue for purposes of determining OHI eligibility is whether Student’s ADHD adversely affects his educational performance. Dr. Patterson testified credibly that Student’s inability to attend and sustain writing as a result of “waxing and waning of attention” and distractibility and impulsivity is adversely affecting his educational performance. Specifically, Dr. Patterson contends that “the lost time from the classroom because of behaviors linked to this disorder appear to have clearly impacted his educational progress.” Student displays a lot of off-task behaviors, according to Dr. Patterson. Student contends that he is “losing educational benefit daily.” While Student has performed at or above grade level in the classroom and has demonstrated grade level performance on standardized tests of academic achievement, Dr. 
Patterson’s testimony that Student will eventually fall behind his peers was credible. Student’s educational performance is adversely affected by virtue of the fact that he is not writing in class because he is losing opportunities to practice, and District experts testified credibly that practice is necessary for improvement. Does Student require special education that cannot be provided with modification of the regular school program? 130. Even if a child has an SLD or OHI that adversely affects the child’s educational performance, in order to be eligible for special education either under the category of OHI or SLD, the child must require special education and related services that cannot be provided with modification of the regular school program. 131. Dr. Patterson testified that Student needs to be in a restrictive setting for at least part of the day in a resource program, or he needs to be in a special day class. According to Dr. Patterson, Student works better when he has the opportunity to make choices, and in a mainstream class it is difficult to set up a program with choices. In addition, Student responds better to a structured environment, and it would be easier for him to work with fewer other students in the classroom. Also, it would be easier to deal with Student’s refusal to work in a small group. Dr. Patterson explained that Student has oppositional components to his behavior and many characteristics of oppositional defiant disorder, which is often comorbid with ADHD. According to Dr. Patterson, one cannot win in attempting to work with a child with oppositional defiant disorder. District attempted “more simplistic” behavior techniques and, while Dr. Patterson gave District credit for trying, it was his opinion that some of the techniques used by District actually empowered Student, and the techniques stopped working after a time. These recommendations are based on Dr. Patterson’s record review and testing of Student for one day. However, Dr.
Patterson is not a behaviorist, and he did not see Student in the classroom or talk to his teachers. 132. District’s expert, Dr. Susan Clare, testified more credibly with regard to whether Student qualified for special education. Dr. Clare holds a doctorate in educational psychology and a Master of Science degree in speech pathology and audiology. Dr. Clare had a decades-long career in public schools as a resource teacher, special day class teacher, and school psychologist. Since her retirement from her position with District, she has been consulting for school districts and families and, among other things, performing psychoeducational and behavioral assessments. Over the course of her career, she has performed hundreds of initial assessments for eligibility and reevaluations. She is qualified to administer, and has administered, virtually all testing instruments used by school psychologists. She has worked with students from all categories of eligibility for special education, except for students with visual impairment. She has analyzed eligibility issues based on ADHD, SLD, ED, autism, and OHI. In addition, she is a board certified behavior analyst, and she participates in developing behavior plans for students. Dr. Clare is also an adjunct faculty member for California State University, Fresno, and teaches advanced applied behavior analysis. Dr. Clare has practiced in other states, including Utah and Washington, and is a licensed prescribing psychologist in one of those other states. Dr. Clare has 33 years of experience in the educational field. 133. Dr. Clare is very familiar with the eligibility criteria under SLD and OHI.
Regardless of whether a student has a disorder of a basic psychological process and a severe discrepancy between ability and achievement for purposes of SLD eligibility, and regardless of whether the student has a health impairment for purposes of OHI, the student does not qualify for special education unless he or she is not making educational progress in the general education program. 134. Dr. Clare has not met Student. However, she reviewed Dr. Patterson’s report, and she has administered every testing instrument that Dr. Patterson administered to Student, except for the DTLA test of motor speed and precision. According to Dr. Clare, Dr. Patterson reported a full-scale IQ in Student of 101, with a range of 95 to 106, meaning Student’s full-scale IQ score falls somewhere within this range. This places Student within the range of average cognitive ability. According to Dr. Patterson’s testing as described in his report, Student has average or above average scores in all areas of cognitive ability, except for decision speed, rapid picture naming, and visual matching, which are all cognitive fluency measures related to processing speed.19 Based on all of Student’s scores, Dr. Clare testified credibly that Student learns well if allowed to use all of his cognition, but does not do as well if he is limited to visual information. However, as discussed above, Dr. Patterson administered the WJ-III before Student’s medication took effect, and Student’s scores in the area of processing speed based on Ms. Anderson’s and Dr. Glidden’s administration of the WISC-IV were much higher, which indicates that the scores Dr. Patterson obtained were incorrect. 135. In Dr. Clare’s opinion, which was credible, Student did not require special education and related services that could not be provided with modification of the regular program. It did not concern Dr.
Clare that Student’s verbal skills were higher than his writing skills because learning to write follows a developmental sequence and a child can talk freely before the child can write freely, and the ability to speak is at a higher language level than the ability to write in second and third grades. Dr. Clare is of the opinion that Student needs practice in order to increase his writing skills. 136. Factual Findings 130 through 135 demonstrate that even assuming Student has an SLD or OHI, he does not require special education and related services. As determined above in Factual Finding 61, Ms. Bath established that Student was progressing academically and benefiting from his education. District established that Student had the ability to complete his work, and District provided accommodations through Student’s 504 plan to reduce the amount of work. The essence of District staff members’ credible recommendations was that Student requires the type of modifications, i.e., additional time, organizational support, workload modifications, and time management support, that could be provided through modification of the general education curriculum. 137. Ms. Bath proposed a positive behavior plan in her June 2006 assessment of Student. The 504 team agreed that the behaviors occurred when Student was pressed to do his work or a nonpreferred task in class or when he believed he was not capable of doing the work well. Yet, some of the things Student refused to do were within his ability level. The team agreed that Student’s behaviors occurred and he refused to do work because, by doing these things, he gained attention and was able to avoid work or tasks that he perceived as nonpreferred. It appeared, according to Ms. Bath, that he was avoiding work and did not want to do it. Student had shown on other occasions that he was capable of doing the work he was being asked to do. 138. Ms.
Wandler established that Student could access the curriculum in the general education classroom. Ms. Wandler testified credibly that the BSP resulted in Student’s use of alternative behaviors, which demonstrated that the BSP was effective. The BSP was an accommodation that the general education staff could implement in the general education program. A district’s child find obligation toward a specific student is triggered when there is reason to suspect a disability and that special education services may be needed to address that disability. (Dep’t of Educ. v. Cari Rae S. (D. Hawaii 2001) 158 F.Supp.2d 1190, 1194.) Neither the statutes nor the regulations establish a deadline by which time children who are suspected of having a qualifying disability must be identified and evaluated. Issue 1: Did District have a child find obligation to assess Student in areas of suspected disability, including visual-motor integration, writing, working memory, social/emotional functioning, and behavior, from June 20, 2005 through June 20, 2007? 9. Based on the above Legal Conclusions and Factual Findings 13 through 17, District had previously assessed Student in this area in January 2005. From February 2005 onward, District had been providing services to Student through a Section 504 plan. Although Student was progressing academically throughout the 2005-2006 school year, his work completion problems escalated in the middle of the 2005-2006 school year. District convened a Section 504 team meeting, which was held on February 28, 2006. In March 2006, after that Section 504 meeting, Ms. Bath contacted Student’s mother regarding Ms. Bath’s suspicion that Student would qualify under the category of ED. District produced an assessment plan for parents to sign in April 2006. Parents signed that assessment plan, and the assessment was conducted in May and June 2006.
Thus, District referred Student for assessment in the area of visual-motor integration in the 2005-2006 school year within a reasonable time after it suspected that Student may have had a disability and may have needed special education and related services. 10. Based on the above Legal Conclusions and Factual Findings 18 and 19, District had previously assessed Student in this area in January 2005. From February 2005 through the day of hearing, District was providing services in this area to Student through a Section 504 plan. Although Student was progressing academically throughout the 2005-2006 school year, his work completion problems escalated in the middle of the 2005-2006 school year. District convened a Section 504 team meeting on February 28, 2006, contacted Student’s mother in March 2006 regarding its suspicion that Student would qualify under the category of ED, and produced an assessment plan for parents to sign in April 2006. Parents signed that assessment plan in April 2006, and District conducted its assessment of Student in the area of writing in May and June 2006. Thus, District referred Student for assessment in the area of writing in the 2005-2006 school year within a reasonable time after it suspected Student may have had a disability and may have needed special education and related services. 11. Based on the above Legal Conclusions and Factual Findings 20 through 22, District had no reason to suspect a disability in the area of working memory in the 2005-2006 school year, or that Student may have needed special education and related services for such a disability. However, District referred Student for assessment in the area of cognitive functioning in April 2006, which was within a reasonable time after District suspected Student may have had a disability in other areas, as discussed above. 12. Based on the above Legal Conclusions and Factual Findings 23 through 27, District had previously assessed Student in these areas in January 2005. 
From February 2005 onward, District was providing services to Student in these areas through a Section 504 plan. Although Student was progressing academically throughout the 2005-2006 school year, his work completion problems escalated and he began exhibiting aggressive behaviors toward other students in the middle of the 2005-2006 school year. District convened a Section 504 team meeting, held on February 28, 2006, to discuss these issues. Ms. Bath contacted Student’s mother in March 2006 regarding her suspicion that Student would be eligible for special education under the category of ED, and produced an assessment plan for parents to sign in April 2006. Student’s parents signed that plan in April 2006, and District assessed Student in these areas in May and June 2006. Based on the foregoing, District referred Student for assessment in the areas of social/emotional functioning and behavior during the 2005-2006 school year within a reasonable time after it suspected Student may have had a disability and may have needed special education and related services.

13. Based on the above Legal Conclusions and Factual Findings 34 through 41, District had no reason to suspect, during the 2006-2007 school year, that Student had a disability in the area of visual-motor integration and needed special education and related services in that area. Student had been assessed in that area in May and June 2006, and his IEP team determined in June 2006 that he was not eligible for special education and related services. The assessment plan of October 2006, which was signed by parents, did not propose assessment in this area. Student was being provided services through his Section 504 plan and was progressing academically. Based on the foregoing, District had no obligation to assess Student in the area of visual-motor integration during the 2006-2007 school year.

14.
Based on the above Legal Conclusions and Factual Findings 42 through 45, District had no reason to suspect, during the 2006-2007 school year, that Student had a disability in the area of writing and needed special education and related services in that area. Student had been assessed in that area in May 2006, and his IEP team determined in June 2006 that he was not eligible for special education and related services. Although Student was not producing a lot of written work in the fall of 2006, he was performing well academically. The assessment plan of October 2006, which was signed by parents, did not propose assessment in this area—it proposed an “update” of Ms. Weigand’s May 2006 assessment “as needed.” Ms. Weigand updated her previous assessment by conducting classroom observation and talking with Student’s teacher. She administered no further testing. In January 2007, Student’s aggressive behaviors returned, and his work completion problems escalated. From February through April 2007, District provided Student with extensive services by Ms. Kathy Wandler through his Section 504 plan in order to increase his writing productivity; the plan was working, and Student was progressing academically. Student’s writing was on grade level. Based on the foregoing, District had no obligation to assess Student in the area of writing during the 2006-2007 school year.

15. Based on the above Legal Conclusions and Factual Findings 46 through 49, District had no reason to suspect a disability in the area of working memory in the 2006-2007 school year or that Student may have needed special education and related services for such a disability. District assessed Student in the area of working memory in May 2006, as discussed above, and his IEP team determined in June 2006 that he was not eligible for special education and related services. There was no reason to suspect a disability in that area during the 2006-2007 school year.

16.
Based on the above Legal Conclusions and Factual Findings 50 through 61, until May 2007, District had no reason to suspect, during the 2006-2007 school year, that Student had a disability in the area of social/emotional functioning and behavior, and needed special education and related services in those areas. Student had been assessed in those areas in May 2006, and his IEP team determined in June 2006 that he was not eligible for special education and related services. Student did not exhibit aggressive behaviors in the beginning of the 2006-2007 school year. The assessment plan of October 2006, which was signed by parents, did not propose assessment in this area. However, the assessment plan did propose observation by behaviorist Kathy Wandler. Ms. Wandler conducted extensive observations and developed a BSP prior to the December 6, 2006, IEP team meeting. In addition, Ms. Bath updated her May 2006 assessment in December 2006 after conducting an observation of Student in the classroom and talking with his teacher. In January 2007, however, Student’s behaviors returned. In February 2007, Student’s parents consented to the implementation of the BSP. Student was provided extensive services by Ms. Wandler through his Section 504 plan, and Ms. Wandler spent many hours over several months training Student’s teacher, in order to increase Student’s writing productivity and reduce his inappropriate behaviors. Ms. Wandler’s BSP was effective in increasing writing productivity and reducing inappropriate behaviors. However, after Ms. Wandler faded out in late April 2007, Student’s behaviors returned, and he began tripping and hitting other students and behaving aggressively and inappropriately. Therefore, in May 2007, District had a reason to suspect that Student had a disability in the area of social/emotional functioning and behavior.

17.
However, District did not have reason to suspect that Student may have been in need of special education at that time because, as Ms. Bath credibly testified, Student was achieving at or above grade level and earning good grades, was progressing academically, was learning, and was benefiting from his education, despite his behavior problems. Ms. Bath’s opinion was based on her assessment of Student, Ms. Weigand’s academic assessment of him, his grades, reports from teachers, and his scores on the STAR. The only indication that Student’s educational performance was suffering was that he would not perform many of his writing tasks in class, and his aggressive behaviors and problems with work completion resulted in his being sent out of the class at times. Ms. Bath testified credibly that Student did not need special education services, such as frequent repetition, in order to learn. Based on the testing and information provided by teachers, when Student learned something, he retained it. Student’s expert, Dr. Patterson, conceded that it is fair to say that Student was learning in the general education environment. Consistent with testimony of Student’s teachers and the people who assessed him that Student “wanted to do things his way” and would refuse to work at times, Dr. Patterson testified credibly that Student had traits of oppositionality, and Dr. Patterson’s report stated that Student had “apparent oppositional defiant disorder.” In light of this, and in light of all of District’s previous assessments, District had no reason to suspect that Student required special education, and it was not required to assess Student in the areas of social/emotional functioning and behavior during the 2006-2007 school year.

Issue 2: Did District fail to assess Student in areas of suspected disability, including visual-motor integration, writing, working memory, social/emotional functioning, and behavior, as part of its May/June 2006 assessment?

18.
Based on the above Legal Conclusions and Factual Findings 66 through 74, District assessed Student in the area of visual-motor integration in May 2006. District had previously assessed Student in this area in January 2005. District occupational therapist Erin Dolin assessed Student in the area of visual-motor integration in May and June 2006. In addition, Ms. Bath reported the scores obtained by Student on the instruments administered by Ms. Anderson and Dr. Glidden to assess Student in the area of visual-motor integration. Thus, District assessed Student in the area of visual-motor integration as part of its May and June 2006 assessment.

19. Based on the above Legal Conclusions and Factual Findings 75 through 82, District assessed Student in the area of writing in May and June 2006. District had previously assessed Student in this area in January 2005. District resource specialist Terri Weigand conducted an assessment of Student in the area of writing in May 2006. In addition, Ms. Dolin’s occupational therapy assessment included assessment in the area of writing. Thus, District assessed Student in the area of writing as part of its May and June 2006 assessment.

20. Based on the above Legal Conclusions and Factual Findings 83 through 87, although District had no reason to suspect a disability in the area of working memory in the 2005-2006 school year or that Student may have needed special education and related services for such a disability, District assessed Student in the area of working memory in May 2006 because Ms. Bath reported in her psychoeducational evaluation the scores obtained by Dr. Howard Glidden in November 2005 and by Ann Anderson in January 2005. These scores were consistent, and Dr. Glidden’s was current. Thus, District assessed Student in the area of working memory during the 2005-2006 school year.

21.
Based on the above Legal Conclusions and Factual Findings 88 through 104, District assessed Student in the areas of social/emotional functioning and behavior in May and June 2006. District had previously assessed Student in these areas in January 2005. District again assessed Student in these areas in May and June 2006. That assessment included the administration of the BASC by Ms. Bath to Student, Mother, and Student’s two second grade teachers. Ms. Bath had many meetings and conversations with Mother and Father from February 2005 through April 2006, and they informed her on an ongoing basis of Student’s difficulties in the area of social/emotional functioning and behavior. School nurse Sue Holmen interviewed Mother regarding Student’s social/emotional functioning and behavior in May 2006, and reported her results, which Ms. Bath considered. Ms. Bath interviewed Student’s two second grade teachers regarding these areas, and she knew Student well. Ms. Bath’s assessment of Student included a functional behavioral analysis. Based on the foregoing, District assessed Student in the areas of social/emotional functioning and behavior as part of its May and June 2006 assessment.

Issue 3: Did District deny Student a free appropriate public education by failing to find him eligible for special education and related services under the category of specific learning disability (SLD) or other health impairment from June 20, 2005, through June 20, 2007?

32. Based on the above Legal Conclusions, and on Factual Findings 106 through 126 and 130 through 138, District did not violate its child find obligation by failing to find Student eligible for special education and related services under the category of SLD from June 20, 2005, through June 20, 2007. Dr. Patterson’s testimony that Student also had processing speed disorders was not ultimately persuasive, and not consistent with other testing of Student in the area of processing speed.
However, the evidence established that Student has a disorder in one of the basic psychological processes—attention. Student does not, however, have a severe discrepancy between ability and achievement. Student’s overall IQ is 101, which is in the average range. His achievement in the area of written expression is also in the average range, and at grade level, pursuant to credible testimony by District employees. Dr. Patterson’s testimony that Student’s score of 81 on the PIAT-R establishes a severe discrepancy between ability and achievement is not credible. Dr. Patterson had not spoken with any of Student’s teachers, and had not observed him in the classroom, and Student’s score on the PIAT-R was much lower than the writing achievement he showed at school. Dr. Patterson was not aware that District had obtained a writing sample from Student in May 2006, and that the sample was at grade level. District expert Susan Clare’s testimony that Student did not have a severe discrepancy between ability and achievement was credible.

33. Moreover, even if Student had a severe discrepancy between ability and achievement, he would be eligible for special education only if the discrepancy could not be corrected through other regular or categorical services offered within the regular classroom and if he required special education. In Hood, supra, the Ninth Circuit held the student to be ineligible for special education and related services under the category of SLD because she was performing at grade level or higher, and her discrepancy could be corrected through regular or categorical services offered within the regular instructional program.

34. Similar to the student in the Hood case, Student does not need special education and related services, as he is able to progress in the general education environment with reasonable accommodations and the BSP provided through his Section 504 plan. Student is performing at or above grade level in all areas.
He consistently maintains grades in the A and B range, with his lowest grade being one C. He scored in the proficient range on the STAR test in English/language arts, and in the advanced range in mathematics. Therefore, Student is ineligible for special education under the category of SLD.

35. Based on the above Legal Conclusions, and on Factual Findings 127 through 138, District did not violate its child find obligation by failing to find Student eligible for special education and related services under the category of OHI from June 20, 2005, through June 20, 2007. Dr. Patterson testified credibly that Student has ADHD, and that Student’s ADHD adversely affects his educational performance because Student is losing some educational benefit when he is off task or spending time in the office as a result of his behaviors. However, as discussed in Factual Findings 33 and 34, Student is not eligible for special education and related services under the category of OHI because he can access his education through regular or categorical services offered within the regular instructional program. While Student had areas of relative weakness, he was achieving and receiving educational benefit in the regular education classroom. From June 2005 through June 2007, Student never received a grade of less than a C on a report card, and received mostly A and B grades. He was performing at or above grade level in the areas of mathematics, reading, and writing according to District assessment results, which were credible. His STAR test results were consistent, with Student performing at the proficient range in English/language arts and at the advanced range in mathematics. Therefore, Student is ineligible for special education under the category of OHI.
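For readers unfamiliar with the severe-discrepancy standard discussed above: under California's SLD regulation, a severe discrepancy is generally one exceeding 1.5 standard deviations between ability and achievement, which is 22.5 points when both tests use standard-score scales with mean 100 and standard deviation 15. The sketch below is a simplified illustration of that arithmetic only, using the scores cited in the decision; it omits the regulation's regression and standard-error adjustments, and it is not part of the decision itself.

```python
# Simplified severe-discrepancy check (assumption: both instruments use
# standard-score scales with mean 100, SD 15; the regulation's regression
# and standard-error adjustments are intentionally omitted here).
MEAN, SD = 100, 15
THRESHOLD = 1.5 * SD  # 22.5 standard-score points

def severe_discrepancy(ability, achievement, threshold=THRESHOLD):
    """Return the ability-achievement gap and whether it exceeds 1.5 SD."""
    gap = ability - achievement
    return gap, gap > threshold

# Scores cited in the decision: full-scale IQ of 101 and the PIAT-R
# written expression score of 81 (the score the ALJ found not credible).
gap, severe = severe_discrepancy(101, 81)
print(gap, severe)
```

Even taking the disputed PIAT-R score at face value, the unadjusted gap of 20 points falls short of the 22.5-point (1.5 SD) line, which is consistent with the finding of no severe discrepancy.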
https://www.californiaspecialedlaw.com/wiki/hearing-decisions/oah-2007060634/
Today is the last day of the 2010 Computational Management Science (CMS) conference that is taking place in Vienna, Austria. The weather is now cool and rainy but it should be clearing up later today. It has been absolutely wonderful to listen to talks on topics ranging from the design of transportation networks for hazardous material shipments, to the analysis of earmarks and financing in humanitarian operations, to the formation of teams and resource allocation. I have been going to the supply chain, transportation, and networks stream of talks and also to several of the game theory and stochastic programming ones. Many of the participants have told me that they will be coming back to future CMS conferences since they have learned so much from this conference and have also had a wonderful time socially. The next CMS conference will take place in Spring 2011 in Neuchatel, Switzerland. Today I will be attending another plenary lecture (the final one, and the closing session of this conference), which will be delivered by Dr. Claudia Sagastizabal, who is Argentinian but works in Rio de Janeiro, Brazil. I commend the organizers for having two female plenary speakers (out of four)! My plenary talk, Supply Chain Networks: Challenges and Opportunities from Analysis to Design, can be accessed, in pdf format, here. Given the number of positive comments that I have received from audience members, I believe that I got across the importance of this topic and its fascinating applications. Plenary talks play a very important role at scientific conferences. I have already extended my congratulations to the other plenary speakers -- Professor Campi of the University of Brescia in Italy and Professor Pistikopoulos of Imperial College in London. Both the content of their talks and their delivery were fabulous. I am getting intellectually spoiled here by the originality of and creativity behind the research presented.
The organizers have done a magnificent job of getting this conference together and I thank them as well.

My flight from Logan Airport in Boston to Frankfurt via Lufthansa was wonderful, except that underneath my seat there was a dog named Bailey, a shih tzu, who whined painfully for hours. Some of us thought that the plane was experiencing mechanical difficulties. As I wrote back in May on this blog, on my flight back from Honolulu, there was a cat next to me that a serviceman had brought on board. Bailey's owner said that it was his third trip to Germany and, for some reason, he could not settle down. I managed to converse a bit with my seatmate and read the NYTimes but then decided that with the conference and my speaking engagements in Europe I had better get some sleep. Luckily, a stewardess found me another aisle seat so that I could catch some shuteye. My new seatmate was a postdoc from Germany, who is Italian, and who was returning from a Gordon conference in New Hampshire, so we ended up having a delightful conversation about scientific research and even the World Cup, and we even managed to get some sleep. The Lufthansa plane had terrific and very comfortable seats, and the bathrooms (first time I had seen this) were in the middle of the plane and a staircase down (lots of legroom there). After a brief layover at the Frankfurt airport I caught my Austrian Airways flight to Vienna.

Yesterday was the first day of the Computational Management Science conference which is taking place in glorious Vienna, Austria. The weather here is sunny and cool with gentle breezes -- simply perfect and a very welcome break from the heat back in the US. Yesterday I gave the opening keynote talk on supply chain networks and had a chance to speak at another session that I had organized. In the latter, one of my former doctoral students, Professor Tina Wakolbinger, also spoke, and it is always extra special to see former students doing so well.
In addition, a group of us, including colleagues from Texas and Florida, went out to lunch together. Last evening a group of us was invited to Professor Georg Pflug's home in Vienna for a lavish dinner buffet. Professor Pflug is the organizer of this conference. It was the perfect evening, with colleagues from around the world conversing and dining in an elegant setting. It will be hard to leave this magnificent city! Of course, I also already managed to indulge in some delicious Mozartkugel chocolates.

I will soon be on a two-week trip to Europe to speak at conferences and have begun to pack. Packing a suitcase is an optimization problem to which we can apply the tools of operations research. I am assuming one carryon suitcase which must fit into the overhead compartment in the airplane(s), so I have to deal with a limited volume. I am also limited as to the weight of the fully packed suitcase since I insist on carrying my suitcase on the plane and having it with me in my travels. (I recall not so fondly having to dump some of my favorite shoes at the Auckland airport since I was over the weight limit, and I refuse, if at all possible, to check my luggage, since my suitcase from Japan, lost two decades ago, compliments of United, has yet to arrive.) Also, every academic has his share of horror stories of suitcases that arrived at the destination after the talks were given. One should pack for the weather (so I have been checking the weather forecasts for my two destinations) and for the occasion. I have to give two plenary talks so I have to look professional for the duration of the two conferences (besides, my European mother instilled this in me and even as a child she dressed me in suits). In Vienna, the temps are forecasted to be rainy and cooler next week, whereas Yalta is a summer tourist destination on the Black Sea and can be quite warm (but probably cooler than in the steamy Northeast of the US now).
Of course (I wonder whether male academics care about this), I believe that the clothing that one takes should be color-coordinated and in good taste (and one should not wear the same outfit every day). Plus, with the right combination of skirts, tops, and accessories (shoes matter, too), one can maximize the number of outfits that one can generate. One has to factor in those evening banquets and conference excursions. Going for this period of time, one also needs to plan for some exercise, so I will make do with a good pair of sneakers and some colorful t-shirts and shorts. Luckily, conferences in the summer only require lighter-weight clothing! So, I need to identify which resources (pieces of clothing) I should take with me in the carryon suitcase so that they exceed neither the volume of the suitcase nor the weight limit, and maximize some representation of the utility or satisfaction that I get from bringing and wearing them. Additional constraints include "matching" constraints -- I can't very well just show up in only tops or only bottoms but need a complete outfit each day when I venture out. One of my most challenging extended business trips took place a few years ago when I had to speak in Cyprus (where it was very hot), then fly to Iceland for a conference (with temps in the mid-40s), and then back to give a talk in Erice, Sicily, where it was also very warm. Luckily, my husband and daughter met me in Iceland and brought warm clothing so I did not have to deal with that part. I emailed my presentations to the organizers of the two conferences so I don't have to lug a laptop, but I have backups on a pendrive. I solved the above optimization problem and am delighted with the result. I could include a photo of my suitcase contents but, instead, I will surprise you with photos after this trip. And, remember to always take an umbrella!
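The packing problem described above is essentially a 0/1 knapsack with two capacity constraints (volume and weight) plus "matching" constraints requiring a complete outfit. A minimal brute-force sketch, with all item names, volumes, weights, and utilities invented purely for illustration:

```python
from itertools import combinations

# Hypothetical packing items: (name, category, volume, weight, utility).
# All numbers are invented for illustration.
ITEMS = [
    ("blazer",   "top",    6, 0.90, 8),
    ("blouse",   "top",    2, 0.20, 6),
    ("t-shirt",  "top",    1, 0.15, 3),
    ("skirt",    "bottom", 3, 0.40, 7),
    ("trousers", "bottom", 4, 0.60, 6),
    ("shorts",   "bottom", 2, 0.25, 3),
    ("sneakers", "shoes",  5, 0.80, 4),
    ("heels",    "shoes",  4, 0.70, 5),
]

MAX_VOLUME = 20   # arbitrary units
MAX_WEIGHT = 3.0  # kilograms

def best_packing(items, max_vol, max_wt):
    """Exhaustively search all subsets; fine for a handful of items."""
    best, best_utility = None, -1
    for r in range(1, len(items) + 1):
        for subset in combinations(items, r):
            vol = sum(i[2] for i in subset)
            wt = sum(i[3] for i in subset)
            cats = {i[1] for i in subset}
            # Feasibility: capacity limits plus the "complete outfit" rule.
            if vol > max_vol or wt > max_wt:
                continue
            if not {"top", "bottom", "shoes"} <= cats:
                continue
            utility = sum(i[4] for i in subset)
            if utility > best_utility:
                best, best_utility = subset, utility
    return best, best_utility

packing, utility = best_packing(ITEMS, MAX_VOLUME, MAX_WEIGHT)
print([p[0] for p in packing], utility)
```

A real packing list would call for an integer-programming solver rather than enumeration, but with a handful of items the exhaustive search above already captures the volume, weight, and matching constraints described in the post.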
Supernetworks are big in China and two of my collaborators, Professor June Dong and Professor Patrick Qiang, are now in Shanghai discussing the formation of a sister Supernetworks Center to the Virtual Center for Supernetworks that I direct. With June Dong I wrote the 2002 book, "Supernetworks: Decision-Making for the Information Age," and last year Patrick Qiang and I co-authored the "Fragile Networks" book. My last trip to China was back in 2006 when I spoke at the Fudan Management Science Forum in Shanghai on Supernetworks: Management Science for the 21st Century. I also served on the Fudan Management Science Prize committee. It was a terrific conference and trip, especially since June Dong was also there and she is a native of Shanghai, China. Such well-known researchers as Mike Pinedo, Charles Corbett, and Eitan Zemel were also there. I took tons of photos. Patrick Qiang recently gave a presentation, "Vulnerability of Fragile Networks," at the School of Management at the Shanghai University for Science and Technology and was hosted by June. Above are photos taken at his presentation, with his host, a group photo with the audience, and an announcement on the university home page, which a student of mine helped to translate. The audience for Patrick's talk even included students of Professor Daoli Zhu, who received the Fudan Management Science prize in 2006.

Yesterday, I met with a group of my doctoral students at the Isenberg School of Management. I wanted to make sure that I gave them proper directions on their research while I was away at conferences in Europe. As I came up to my office there was a student who had recently defended his PhD dissertation successfully and wanted to say good-bye to me. He greeted me with a beautiful bouquet of flowers in appreciation (featured above). I was very touched that he would take the time to do this and to leave a "last" lasting impression. Now, as a "Dr."
he will, in a few days, be starting his excellent position in statistical and economic consulting with a major consulting firm in Washington DC. Although I was not on his dissertation committee, as Area Coordinator of the Management Science track of our PhD program, I spend a lot of time helping to recruit doctoral students, who come from around the world, and in making sure that they are comfortable and succeed upon matriculation. This latest successful PhD to graduate from our program is from Europe. He finished his PhD in 4 years, despite the untimely death of his original dissertation advisor last December. In his most recent email message to me, he said: "Thank you for all your support throughout my doctoral studies. If not our phone interview and your invaluable guidance four years ago, I don't think I would be in the program at all. Thank you!" Wouldn't the world be a better place if people took the time to say hello to their colleagues and co-workers, if bosses treated their employees with respect and kindness, and if we all supported one another and celebrated the successes of our students, staff, faculty, and even administrators?! Manners go a long way and kind words can make any day, even a difficult one, truly special. The above student will continue to succeed because of his thoughtfulness and courtesy. The circle of academic life is very special. As a community, though, we need to regularly celebrate the achievements and milestones of all involved and, at the very least, to extend our congratulations to students who get into our programs, who make good progress, sometimes under challenging circumstances, and who, ultimately, earn their degrees and get terrific jobs.

I just completed listening to the INFORMS podcast interview with Admiral Mike Mullen, the Chairman of the Joint Chiefs of Staff. He holds a Masters degree from the Naval Postgraduate School in Monterey, California.
It is a fantastic interview, conducted by Barry List, the Communications Director of INFORMS, and Peter Horner, the Editor of ORMS Today. In the interview, Admiral Mullen discusses the critical importance of operations research (O.R.) in military applications and how the needs for O.R. have changed over the past two decades. He talked about the issues of resources and constraints and the importance of optimization and systems thinking, and even discussed linear and nonlinear programming. I was really pleased to hear his ideas on logistics and supply chains and how important optimization is also in business, which was gratifying coming from a top military leader. In addition, his points about military assistance in disasters, and the lessons learned from Haiti, were extremely relevant. He spoke of constraints, resources, and flows, emphasized how O.R. helps with framing problems, and noted how economic and business strength affects security. He asked: "How do I optimize flowing the right information at the right time to those in the field and during warfare?" You can listen to the interview in podcast form here. This interview should not be missed!

Three Women Write a Book on Environmental Networks -- Was it Too Early?

In 1999, two of my former doctoral students, Kanwalroop "Kathy" Dhanda and Padma Ramanujam, both of whom were from India, and I had our book, "Environmental Networks: A Framework for Economic Decision-Making and Policy Analysis," published by Edward Elgar Publishing in the series New Horizons in Environmental Economics. The book was based on years of our research and publications that we had authored and co-authored and that had appeared in such top journals as Operations Research, Transportation Science, Networks, the Journal of Regional Science, and Energy Economics, among others.
I always wondered, how many books, even of a nontechnical nature, have been published with three (or more) females as co-authors? Now, with parts of the world reporting record-breaking summer heat and Congress mired in inertia regarding the passage of a major climate bill, as David Leonhardt writes in The New York Times, I thought it important to bring this book to the renewed attention of policy makers. The book describes rigorous tools for the determination of pollution permits and taxes, and associated environmental emissions, from both stationary sources, such as firms, as well as from moving sources of pollution, such as vehicles. Padma's doctoral dissertation was on the latter topic and it was awarded the 1999 Transportation Science Section of INFORMS dissertation prize. The Harvard economist, Robert Stavins, whose work we cited in our book, told Leonhardt recently that he would actually prefer a bill that cut emissions less in the short term but created a template for much bigger cuts in the future. "Success, to me, would be the beginning of political acceptance of carbon pricing," he said. Leonhardt believes that "A utility-only cap, even a flawed one, really would represent a whole different kind of progress than a souped-up version of fuel economy rules. A cap — any decent cap — remains the best benchmark of success." Interestingly, the doctoral dissertation of my most recent PhD student to graduate, Dr. Trisha Woolley, entitled "Sustainable Supply Chains: Multicriteria Decision-Making and Policy Analysis for the Environment," focuses on the electric power industry and utilities and considers both carbon taxes (either centralized or decentralized) as well as pollution permits. In the next few weeks, before the Senate breaks for its August recess, or in September, before the midterm election campaign takes over, major issues regarding the climate bill will be decided, we expect and hope. Rachel Carson, with her book, "Silent Spring," changed the world.
It is time for another book to do the same and break the legislators out of their inertia. And yes, women do write "Big Ideas" books, something that Germaine Greer has even been emphasizing, although, ironically, some of these books may be rather mathematical and scientific!

I have just about finished reading Henry Petroski's book, "The Essential Engineer," which was published in 2010. It is filled with excellent ideas about why engineering is different from science and how science and engineering and their practitioners and innovators must work together to address the grand challenges today, from renewable energy to reducing vulnerability and even to securing cyberspace. Henry Petroski is a professor of civil engineering and history at Duke University. Coincidentally, in today's New York Times, there is an article by William Broad, "Taking Lessons from What Went Wrong," which begins with the eye-catching sentence, "Disasters teach more than successes," with the overall thesis that disasters can spur innovation. The article includes an interview with Petroski and a graphic photo of the BP oil rig disaster. Technological feats that define the modern world are sometimes the result of events that some might wish to forget, from the collapse of the Tacoma Narrows Bridge in 1940 due to winds (with no lives lost), to the collapse of the Minneapolis bridge in 2007 (with 13 lives lost), to the sinking of the Titanic on its maiden voyage (with over 1,500 deaths, some due to hypothermia), and even the World Trade Center disaster (with approximately 3,000 deaths). Now we are all reeling from the BP oil rig disaster, with ups and downs on almost a daily basis as to progress or lack thereof regarding the spill containment and the propagation of the massive effects on the environment and affected economic sectors and regions. I had written earlier on this blog about forensic accounting and we had even hosted Dr.
Brian Levine, who spoke on his research on the forensic investigation of the Internet and mobile devices. Our modern era demands a new area of expertise -- that of forensic engineering, which should clearly have risk management and policy analysis as essential constructs to assist in lessons learned (so mistakes do not get repeated in the future). Interestingly, in both of the books noted above, Petroski uses the challenges of engineering design, in the context of bridge design, as vivid examples. He considers bridge designers to be very creative individuals who develop mental constructs of a bridge, combined with aesthetics, and then mathematically design the functional structure, which, I might add, should last for many years and support the weight of numerous vehicles. My uncle, Stanley Jarosz, is an award-winning bridge designer, who, although he is almost 92 years old, still works several days a week at an engineering firm. He is one of my greatest inspirations and an exceptional role model and gentleman (who, I might add, is also a big opera aficionado). I had the pleasure recently of seeing my uncle and my terrific cousin, Andrew (who, I might add, is a fellow Brown University grad), in NYC. I discussed Petroski's "The Essential Engineer" with my uncle and noted Petroski's almost mystic adulation of bridge designers. Solving the grand challenges faced by our civilization will require the cooperation of our best, creative minds, as well as capturing, in a quantifiable and rigorous manner, the risk associated with the resulting innovations. There is a wonderful (but too short) interview by Ron Howard, the director of the Academy Award-winning movie "A Beautiful Mind," with John F. Nash Jr., the 1994 Nobel laureate in Economic Sciences, in a trailer that accompanies the DVD of this movie. I had the pleasure of recently seeing both.
As John Nash walks away at the end of the interview, bundled up in a warm overcoat and knit cap, he ruefully comments that he has lost so many years and he needs to get back to research, since that is what matters. John Nash's contributions to game theory earned him the Nobel Prize. His work has influenced numerous disciplines in addition to economics, notably operations research and management science, political science, applied mathematics, and computer science. I cite Nash's classic (1950) and (1951) papers in many of my papers that deal with competition. For example, in a paper, "Supply Chain Network Design Under Profit Maximization and Oligopolistic Competition," which was published recently in the journal Transportation Research E (2010), I devised a model in which firms seek to determine their optimal supply chain network designs in terms of manufacturing, storage, and shipment capacities, as well as product flows, so as to maximize profits. The governing concept is that of a Nash-Cournot equilibrium. This model extends my earlier model in which a firm seeks to design (or redesign) its supply chain network so as to minimize the total costs associated with capacity enhancements (even from scratch) as well as the operational costs. In the latter, no competition was assumed. That study, "Optimal Supply Chain Network Design and Redesign at Minimal Total Cost with Demand Satisfaction," is in press in the International Journal of Production Economics. High tech companies, including Samsung, Hewlett Packard, and IBM, as well as apparel companies from Benetton to Zara, well understand the competitive advantages of careful cost control in supply chains. In addition, more and more companies, including Frito-Lay, Tesco, P&G, and Colgate, are being recognized for their supply chain performance.
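To make the Nash-Cournot idea concrete, here is a minimal sketch with made-up numbers (a textbook two-firm Cournot game, not the supply chain network model from the paper): each firm repeatedly plays its best response to the other's output until neither can improve, which is exactly an equilibrium in Nash's sense.

```python
# Minimal two-firm Cournot game (hypothetical numbers, NOT the paper's
# supply chain network design model): inverse demand p = a - b*(q1 + q2),
# constant marginal costs c1 and c2.  Firm i's best response to q_j is
# q_i = max(0, (a - c_i - b*q_j) / (2b)); for this linear game, iterating
# best responses converges to the Nash-Cournot equilibrium.

def cournot_equilibrium(a, b, c1, c2, tol=1e-10, max_iter=10_000):
    q1 = q2 = 0.0
    for _ in range(max_iter):
        new_q1 = max(0.0, (a - c1 - b * q2) / (2 * b))
        new_q2 = max(0.0, (a - c2 - b * q1) / (2 * b))
        converged = abs(new_q1 - q1) < tol and abs(new_q2 - q2) < tol
        q1, q2 = new_q1, new_q2
        if converged:
            break
    return q1, q2

q1, q2 = cournot_equilibrium(a=100, b=1, c1=10, c2=20)
price = 100 - (q1 + q2)
# Closed-form check for the linear game: q_i = (a - 2*c_i + c_j) / (3*b),
# so q1 = 100/3 and q2 = 70/3 here.
```

The lower-cost firm produces more at equilibrium, as expected; the same fixed-point logic, applied to far richer network cost and capacity structures, underlies the models in the papers above.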
The analytical challenges of identifying the optimal capacities associated with the various supply chain network activities, together with the optimal production quantities, storage volumes, and shipments, are tremendous, since the possibilities of where to site manufacturing plants and distribution centers, for example, and at which capacities, may be great. Furthermore, the determination of the optimal supply chain network design (or redesign, if a supply chain network already exists with some capacities) needs to be done in a rigorous manner that captures the system-wide nature of the problem. I've also recently made use of the Nash equilibrium concept in devising a model to capture the gains of possible mergers and acquisitions of firms which are competitors. That paper, "Formulation and Analysis of Horizontal Mergers Among Oligopolistic Firms with Insights into the Merger Paradox: A Supply Chain Network Perspective," is in press in the journal Computational Management Science. The algorithms that can be applied to determine the optimal designs of supply chain networks, operating either in a centralized manner or in a competitive, decentralized manner, are also reported in the above papers. I agree with Nash that it is imperative to carve out the necessary time to do research. The intellectual life of a university depends on the research of its faculty and the give and take with students through teaching and scholarship. I teach and conduct research at the Isenberg School of Management at UMass Amherst, which values interdisciplinarity. The academic departments at Isenberg clearly reflect this, from my own department, Finance and Operations Management, to Resource Economics, Sport Management, and Hospitality and Tourism Management, plus even the more, shall I say, "classical" departments of Management, Marketing, and Accounting and Information Systems. My department, in the last two years, has recruited two new Assistant Professors, Dr.
Ahmed Ghoniem and Dr. Senay Solak, from Virginia Tech and Georgia Tech, respectively. Their work has already earned not only accolades but grants: Dr. Ghoniem has received funding from the Qatar National Research Fund to study how to minimize air traffic congestion, whereas Dr. Solak has received funding from NSF to conduct research on housing foreclosures and nonprofit organizations. Last year, we recruited the new Dean of the Isenberg School, Dr. Mark Fuller, whose appointment is also in my department (and, yes, I served on the search committees for all these hires, and chaired those of the two above). Also, based on a proposal for a cluster hire that I was involved in, with colleagues in Computer Science, Electrical and Computer Engineering, and even Communications, we were able to recruit for a position in cyber security. Dr. Traci Hess and her husband, Dr. John Wells, both from Washington State University, will be joining my department this Fall as Associate Professors. My colleagues conduct research on hedge funds, serve on FDA panels on food safety and risk, and explore issues of privacy and marketing, to highlight only a few very cool research topics. My own research and passion is heavily focused on networks, especially on complex networks with applications as varied as supply chains, humanitarian logistics, financial networks, electric power generation, and transportation. My recent research has emphasized network design and fragility and vulnerability analysis, with applications to critical needs and healthcare products, from vaccines to medicines, as well as sustainability. Being in a business school, one can conduct research on the latest topics, and this brings an energy and enthusiasm to the classroom which are palpable. Being trained in operations research and management science (my advisor at Brown University, Dr. Stella Dafermos, was the only female at that time in Engineering and Applied Math), one has the skill set to address many important problems.
Although campuses will be teeming with students, both new and experienced ones, come the Fall, during the summer the faculty are getting prepared as they work on new breakthroughs, present their work at conferences around the globe, rework their course materials, and take the time to reflect. Soon I will be giving keynote talks in Europe -- at the Computational Management Science conference in Vienna, Austria and at the Yalta Optimization Conference -- Network Science in Ukraine. As I tell my students, research can really take you places and introduce you to fascinating people! Working at a university is a great privilege and responsibility and is always filled with new challenges, opportunities, and, sometimes, even drama (I will write about the latter topic sometime in the future)! After the International Conference on Computational Management Science in Vienna, Austria, I will be off to give a keynote talk at the 3rd Yalta Optimization Conference, which will take place in Yalta, Ukraine, at the beginning of August. The theme of this year's Yalta conference is Network Science. Of course, this scientific conference should not be confused with the famous historic Yalta Conference of 1945! I am looking forward to seeing this very scenic location on the Black Sea and to practicing both my Ukrainian and Russian. Plus, it will be great to reconnect with wonderful colleagues and friends there and to listen to the talks. Special thanks to the organizers, Professors Butenko, Prokopyev, and Shylo, for letting me deliver the keynote talk with the title and abstract below. Abstract: The growing number of disasters globally has dramatically demonstrated the dependence of our economies and societies on critical infrastructure networks. At the same time, the deterioration of the infrastructure, from transportation and logistical networks to electric power networks, due to inadequate maintenance and development as well as to climate change, has resulted in large societal and individual user costs.
This talk will focus on recently introduced mathematically rigorous and computer-based tools for the assessment of network efficiency and robustness, along with vulnerability analysis. The analysis is done through the prism of distinct behavioral principles, coupled with the network topologies, the demand for resources, and the resulting flows and induced costs. The concepts will be illustrated in the context of congested transportation networks, supply chains under disruptions, financial networks, and dynamic networks such as the Internet and electric power networks. We will further explore the connections between transportation networks and different network systems and will quantify synergies associated with network integration, ranging from corporate mergers and acquisitions to collaboration among humanitarian organizations. I am very much looking forward to the International Conference on Computational Management Science that will take place in Vienna, Austria later this month. Vienna is a center of scholarship, culture, architecture, music, and intrigue. The last time that I was in Vienna was back in March 2009, when I spoke on Synergies and Vulnerabilities of Supply Chain Networks in a Global Economy at the Vienna University of Economics and Business Administration and was hosted by my esteemed and wonderful colleague there, Professor Manfred Fischer. At the Computational Management Science (CMS) 2010 conference, which will be at the University of Vienna, I will be giving a keynote/plenary talk, entitled "Supply Chain Networks: Challenges and Opportunities from Analysis to Design," and the abstract is below. Abstract: Supply chain networks provide the backbones for our economies since they involve the production, storage, and distribution of products as varied as vaccines and medicines, food, high tech products, automobiles, and even energy. Many of the supply chains today are global in nature and present challenging aspects for modeling and analysis.
In this talk I will discuss different perspectives for supply chain modeling, analysis, and computation based on centralized vs. decentralized decision-making behavior, along with suitable methodological frameworks. I will also highlight applications to mergers and acquisitions and even humanitarian logistics through supply chain network integration. Such timely issues as risk management, demand uncertainty, outsourcing, and disruption management in the context of our recent research on supply chain network design and redesign will also be discussed. Suggestions for new directions and opportunities in healthcare and sustainable supply chain networks will conclude this talk. I thank the organizers of CMS 2010 for giving me the opportunity to deliver this talk. On June 11, 2010, the 2010 World Cup began and that was the same day that we mailed a postcard to ourselves from Buenos Aires, Argentina, where I spoke at the ALIO-INFORMS conference. Remember that today is July 12, 2010, the day after the ending of this World Cup in South Africa. I am pleased to report that the postcard that we sent to ourselves (since we love to track how long deliveries can take) arrived today! It took 1 month and 1 day and, strangely enough, it had a side trip to Mexico. Perhaps it was because Argentina beat Mexico in this World Cup?! We started getting messages from relatives and friends in the past 10 days or so that our postcards to them from Buenos Aires were trickling in. I wish that there had been a GPS tracker on this card because I am sure that the journey that it took was fascinating from a transportation point of view. Above is what we are calling our infamous World Cup postcard. We whited out our address, which was present, so that was not the excuse for this extreme snail mail delivery. At least it did arrive, finally! Wherever you may have been this weekend, it was hard not to notice the excitement surrounding the 2010 World Cup final in South Africa. 
I was in cosmopolitan, gorgeous NYC this past weekend, which was filled not only with locals who weren't at the Hamptons, but also with many tourists from around the world. Yesterday, the scores during the Spain vs. The Netherlands final match were relayed from taxi driver to taxi driver (we saw these parked) and from tourist to tourist, so one could always catch the score even while walking. As the German octopus named Paul "predicted," Spain beat The Netherlands, with a final score of 1-0. Spain had never before reached a World Cup final, and The Netherlands had never won one, so it was a great achievement for both. The day before, Germany had beaten Uruguay to take third place in this 32-team World Cup. Thank you, South Africa, for a month of great sports in the form of the game of soccer that brought fans around the globe to focus on the exciting games in your country. It was the first time that the World Cup was hosted on the continent of Africa, and South Africa should be congratulated for the success of this month-long sports event! Dr. James "Jim" Simons is an amazing man. He is a mathematician and a financier and the President of Renaissance Technologies, a well-known, very successful private investment firm and hedge fund. As chair of the Department of Mathematics at SUNY Stony Brook from the late 1960s to the mid-1970s, he built the department into one of the top in the country. Since leaving SUNY Stony Brook, where his service and contributions were laudable enough, he and his wife have not stopped supporting that state university. His wife Marilyn, who has a PhD in economics from SUNY Stony Brook, has served since 1994 as President of the Simons Foundation, a charitable organization supporting researchers and institutions conducting advanced work in the basic sciences and mathematics, with a major emphasis on autism. In February 2008, the foundation gave $60 million to this state university, which is one of 64 institutions in this public university system.
The press release can be read here. Interestingly, SUNY Stony Brook offered me a tenure-track Assistant Professorship in the Department of Applied Math and Statistics, after my interview there, while I was completing my PhD in Applied Math with a specialty in Operations Research at Brown University, but I declined. Our new Provost at UMass Amherst, Dr. James Staros, came from SUNY Stony Brook. James Simons is now one of the world's wealthiest men (see where knowing and loving mathematics can take you) and the Simons Center at SUNY Stony Brook carries his family's name. He is the son of a shoemaker from Massachusetts and received his degrees from MIT and UC Berkeley. The New York Times has an interesting and timely article on private donations to state universities. Today many state universities in the US are suffering from shortfalls of support from their respective states. Dr. Simons believes that donors do not want to support failing institutions and have their donations simply used as plugs and patches for state funding gaps (or should I say, "gorges"). He supports differential tuition for the individual state university institutions, a proposal that has sparked some serious debate. You can read the article in the Times here. What Dr. Simons has done and continues to do in support of his state university, although he is neither an employee, a student, nor an alum, is outstanding and laudable. I wish that more people had his vision and his courage, and that others would see and understand the essential role that public universities, in addition to private ones, play in our nation. I give annually to my state university and employer for a scholarship fund for students, and the best thank-you that I get is the excellence of our students and their joy in learning.
One year ago, our book, Fragile Networks: Identifying Vulnerabilities and Synergies in an Uncertain World, was published; it was recently noted by the Library Journal to be a top-selling book in technology and engineering. 6 months ago, on January 12, 2010, a huge earthquake hit Haiti and awakened the world to the devastation it wrought. I wrote regularly in this blog about the earthquake, the resulting human suffering and loss of lives, and the loss of critical infrastructure, from the roads to telecommunications, plus hospitals and even schools, which only added to the suffering of the survivors. I also called for better coordination among the stakeholders, and especially the humanitarian organizations, for the provision of necessary supplies and decent logistics. Today, The New York Times has an OpEd piece written by my colleagues at Georgia Tech, which is right on target, and which speaks to one of the major themes of our Fragile Networks book: that the identification of the critical network links before (and after) their degradation and even ultimate devastation is essential. According to the OpEd piece, Haiti's External Weight, by Professors Desroches, Ergun, and Swann, 6 months after the earthquake: twenty million to 25 million cubic yards of debris fill the streets, yards, sidewalks and canals of Port-au-Prince — enough to fill five Louisiana Superdomes. Debris is one of the most significant issues keeping Haitians from rebuilding Port-au-Prince and resuming normal lives. Much of the stuff has been left in place or simply moved to the center or the sides of roads. Some streets with especially large piles of refuse are impassable. As a result, it can take hours to travel just a few miles. Meanwhile, schools, hospitals, businesses and homes remain blocked. Amazingly, only about 5% of the original debris has been properly disposed of, and there are serious concerns about the ultimate impact of the debris on the environment, as well.
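The idea of identifying critical network links can be sketched in a few lines of Python (a simplified illustration with fixed link costs and made-up data; the measure in Fragile Networks is defined on behaviorally derived equilibrium costs): network efficiency here averages demand per unit of minimal travel cost over the origin/destination pairs, and a link's importance is the relative efficiency drop when that link is removed.

```python
import heapq

# Simplified sketch of a network efficiency measure (toy data, fixed
# link costs rather than congestion-dependent equilibrium costs):
# efficiency = average over O/D pairs of demand / minimal travel cost;
# a link's importance = relative efficiency drop when it is removed.

def shortest_cost(links, origin, dest):
    """Dijkstra over {(u, v): cost}; returns minimal cost or None."""
    graph = {}
    for (u, v), c in links.items():
        graph.setdefault(u, []).append((v, c))
    heap, seen = [(0.0, origin)], set()
    while heap:
        cost, node = heapq.heappop(heap)
        if node == dest:
            return cost
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, []):
            heapq.heappush(heap, (cost + c, nxt))
    return None

def efficiency(links, od_demands):
    total = 0.0
    for (o, d), demand in od_demands.items():
        cost = shortest_cost(links, o, d)
        if cost:                      # skip disconnected or zero-cost pairs
            total += demand / cost
    return total / len(od_demands)

links = {("a", "b"): 1.0, ("b", "c"): 1.0, ("a", "c"): 5.0}
demands = {("a", "c"): 10.0}
base = efficiency(links, demands)                 # 10 / 2 = 5.0
without = dict(links); del without[("a", "b")]
drop = (base - efficiency(without, demands)) / base
# Removing link (a, b) forces the cost-5 route: importance = (5 - 2)/5.
```

In this toy network, removing link (a, b) forces all demand onto the expensive direct route, so that link is flagged as highly important -- exactly the kind of ranking that, at scale and with equilibrium behavior, identifies the critical links discussed above.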
Clearly, the efficiency and performance of the transportation and logistical networks in Haiti have been severely affected and degraded without timely debris removal. Since such networks provide the infrastructure for the movement of people and goods, how can Haiti's economy and its citizens move forward?! In May 2008, I had the privilege of convening a workshop, Humanitarian Logistics: Networks for Africa, at the Rockefeller Foundation's Bellagio Center on Lake Como. It was apparent to us then, as it is now, that with the increasing number of disasters documented globally, more attention to education, to research, and to policy analysis regarding humanitarian logistics and sustainable operations is sorely needed. Crises don't end once the media attention dissipates. I have been writing in this blog about our latest research on supply chain network design for critical needs products, from vaccines to medicines, and on hospital supply chains. I was delighted to read in the Boston Globe that Sanofi-Aventis, one of the world's largest drug makers, is expanding its operations in Massachusetts and that this will mean more jobs -- jobs of importance and meaning. According to the article: Sanofi-Aventis SA is planning a $65 million expansion in Cambridge that will create about 300 jobs, making it the latest foreign pharmaceutical giant to invest in Massachusetts. The Paris-based drug maker is in the process of leasing space in Cambridgeport, where it will establish a joint headquarters for a new cancer division. As you may recall (what a feeling of deja vu), last summer at this time the world was battling the H1N1 virus from Argentina (coincidentally, the destination of my most recent international trip, in the summer of 2010) to China to the United States.
Sanofi-Aventis was one of only a handful of H1N1 vaccine producers and, given the low number of vaccine manufacturers and the various challenges with the production of this vaccine, my research group began a study on multiproduct supply chain network design with applications to healthcare. Clearly, with fixed capacities, some of the vaccine producers had to switch from production of the annual flu vaccine to the challenging H1N1 vaccine. We are pleased that the study has resulted in a paper/report, "Multiproduct Supply Chain Network Design with Applications to Healthcare," which, given the timeliness of the topic, we are making available to our readers through the Virtual Center for Supernetworks. The research allows pharmaceutical firms to redesign their supply chains in an optimal way to enable the cost-minimizing production of multiple vaccines and medicines with demand satisfaction. Research at business schools is increasingly focusing not only on how wealth can be created but also on how it can be done in a socially responsible way. Germany just lost to Spain on a brilliant header, giving Spain a 1-0 win. The final game will take place this Sunday in South Africa, with The Netherlands playing Spain for the World Cup championship. This will be Spain's first World Cup final appearance. I studied both Spanish and German, so I did not favor either team. Amazing, though, to have 2 European soccer teams be the finalists of this great World Cup tournament in South Africa. The Netherlands Beats Uruguay in a Frenetic Finale! The Netherlands beat Uruguay 3-2 to advance to the finals of the 2010 World Cup. The game, as the announcer said, was "frenetic" and a heart-stopper. Now we have Germany vs. Spain tomorrow, and the winner of that game will play The Netherlands. Amazing to have a final of this World Cup with only European teams. To all of my colleagues and friends in The Netherlands, congratulations!
The headers and athleticism of your soccer team have made your matches super exciting to watch at the 2010 World Cup in South Africa! History has already been made, since no European team had ever won a World Cup outside of Europe! Although we are living in a world of "throwaways," we are seeing an exciting convergence of corporate social responsibility, green logistics, healthcare, and even humanitarian operations through the recycling, redesign, and reprocessing of medical products and associated medical waste, so there is HOPE! Interestingly, as The New York Times is reporting in the article, "In a World of Throwaways, Making a Dent in Medical Waste," by Ingfei Chen, the biggest source of medical refuse is the operating room (O.R.), which accounts for 20-30% of a hospital's waste. A nonprofit group in Virginia, Practice Greenhealth, is now working on reducing the environmental footprint of health care institutions with its Greening of the O.R. initiative, which is focusing on identifying the best sustainable practices for reducing operating room garbage, energy consumption, and indoor air quality problems, while lowering expenses and improving safety -- all fantastic goals! Reducing the waste associated with medical supplies and equipment, which can be achieved through recycling and reprocessing, for example, can save on new purchases and can also reduce landfill fees and incineration costs. For example, according to the article, the Hospital Corporation of America, which owns 163 hospitals, eliminated 94 tons of waste last year through the reprocessing of medical supplies! I am reminded of the similarity of medical waste recycling and reprocessing issues to those of electronic recycling, or e-cycling, a topic that I have written about in the past with Dr. Fuminori Toyasaki.
Our paper, "Reverse Supply Chain Management and Electronic Waste Recycling: A Multitiered Network Equilibrium Framework for E-Cycling," remains one of the top cited papers in Transportation Research E. Dr. Ralph Pennino, the chief of plastic surgery at Rochester General Hospital in upstate New York, notes that surgeons have agreed to use standardized supply kits selected to cover most of their needs while leaving little unused, so that they can "work systems out so we don't have anything to reprocess." This is said beautifully and speaks to the importance of designing health care supply chains and medical products accordingly, a topic that we have also been writing about, and where we specifically allow decision-makers to assign costs associated with oversupply/waste. Dr. Pennino notes that leftover items are donated to InterVol, a nonprofit organization that he started in 1989. Each week, its volunteers gather about 8,000 pounds of unused supplies and reusable equipment from regional health care facilities, then ship the stock to clinics in more than two dozen countries, including Somalia and Haiti. This is an example of the best in green logistics, healthcare, and humanitarian operations! Do you know the value of the parts in your iPhone and what it took to put them all together? The way to calculate the various costs of the components, plus the labor that is needed to manufacture and assemble them so that consumers can acquire the latest hot product from Apple, is to deconstruct the iPhone supply chain. Priced at about $600, the iPhone 4 can be deconstructed by tracking its components, from the dozen integrated chips and flash memory to the casings to the embedded GPS system. The hardware is what we can all see, and these physical components contribute just under $200 to the price of the iPhone 4.
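Such a teardown amounts to a small bill-of-materials calculation. The sketch below uses purely illustrative placeholder numbers (not iSuppli's actual teardown figures) to show how component, assembly, and residual (margin, software, marketing, logistics) shares of a retail price can be tallied:

```python
# Hypothetical bill-of-materials for a $600 device -- illustrative
# placeholder numbers only, NOT iSuppli's actual teardown figures.
RETAIL_PRICE = 600.00
bom = {
    "integrated circuits":  130.00,  # roughly two-thirds of hardware cost
    "display/touch module":  40.00,
    "other components":      25.00,
    "assembly labor":         7.00,  # assembly is the smallest slice
}
production_cost = sum(bom.values())          # hardware plus assembly
residual = RETAIL_PRICE - production_cost    # margin, software, logistics
shares = {item: cost / RETAIL_PRICE for item, cost in bom.items()}
```

Even with made-up numbers, the structure of the calculation makes the point: the physical build is a modest fraction of the retail price, with most of the value captured at the design and distribution ends of the chain.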
According to an article in The New York Times, the smallest part of Apple's costs is manufacturing and assembly, which take place in China, currently in Shenzhen. The Chinese assembly-line workers put together the microchips, which come from Germany and Korea, a touch-screen module from Taiwan, even American-made chips that pull in Wi-Fi or cellphone signals, and more than 100 other components! This is truly a global supply chain, with the greatest value added to the product coming at the front end and the back end. In the iPhone 4, more than a dozen integrated circuit chips account for about two-thirds of the cost of producing a single device, according to iSuppli. However, soaring labor costs caused by worker shortages and unrest, a strengthening Chinese currency that makes exports more expensive, and other issues such as inflation and rising housing costs are all threatening to sharply increase the cost of making devices such as notebook computers, digital cameras and smartphones. Desperate factory owners in China are already shifting production away from the country's dominant electronics manufacturing center in Shenzhen toward lower-cost regions, even in China's mountainous interior. The world of contract manufacturers is invisible to consumers. According to The New York Times, it is a $250 billion industry, with only a few companies, like Foxconn (which has been much in the news lately because of the worker strikes), Flextronics, and Jabil Circuit, manufacturing and assembling for all the global electronics brands. These companies compete on price to earn small profit margins, analysts say. As we who work and conduct research in operations and supply chains know, such firms try to benefit from even minute operational changes to attain competitive advantage. For example, the Chinese companies have very low profit margins, and increasingly the Chinese are not favoring low-end assembly work.
At the same time, Foxconn is spending heavily on manufacturing many of the parts, molds and metals that are used in computers and handsets, even trying to find larger and cheaper sources of raw materials and locating plants closer to mines with the necessary raw materials. So when you look at your smartphone, think about what went into your latest high-tech gadget and favorite product, from the knowledge resources that helped to create its design, to the natural resources that enabled the construction of its components, to the human resources that put the components together, to the transportation services that delivered it to the stores. And, as you hold the product in your hands, remember the long journey that it took to reach you, made possible by the complex global supply chain that enabled its design, manufacture, assembly, and distribution. I am preparing my talks for the 2010 International Conference on Computational Management Science, which will take place in glorious Vienna, Austria, July 28-30, at the University of Vienna. I am very much looking forward to this conference. Besides giving the invited keynote talk, "Supply Chain Networks: Challenges and Opportunities from Analysis to Design," I will also be presenting, "Supply Chain Network Design for Critical Needs with Outsourcing," which is based on the paper forthcoming in the journal, Papers in Regional Science, with my doctoral student, Min Yu, and Professor Qiang "Patrick" Qiang of Pennsylvania State University in Malvern. Mr. Thomas Seyffertitiz of the Vienna University of Economics and Business (the largest business school in Europe) will be speaking on "Vulnerability and Disruption Analysis in Supply Chain Networks: A Layered Network Perspective;" Professor Patrizia Daniele of the Department of Mathematics and Computer Science at the University of Catania in Italy will present "Supply Chain Networks and Infinite Dimensional Duality Theory."
The list of invited keynote speakers can be found here. The program with accepted talks and schedule can be downloaded here. There is nothing like a small town parade, and today we enjoyed the 4th of July parade in Amherst, Massachusetts. The photos above were taken in downtown Amherst, where, as the residents always say, only the "h" is silent. Tonight we are looking forward to the extravagant fireworks display which takes place annually. Rather than having 3 South American teams in the final four of the 2010 World Cup in South Africa, as many were predicting, we have one South American team left, Uruguay (which beat Ghana yesterday on penalty kicks), and three European teams: The Netherlands (which beat Brazil 2-1), Germany (which beat Argentina 4-0 today), and Spain (which beat Paraguay 1-0 today). Next week, Uruguay will play against The Netherlands, and Spain will play against Germany. The New York Times had a wonderful article on Diego Maradona, the coach of the Argentinian team, which, I suspect, he may even have time now to read. I heard that his top player, Lionel Messi, was not feeling well on Thursday and missed practice (perhaps that affected his team's performance today). Congratulations to all the teams who played their hearts out at this World Cup. From my perspective as a spectator and a huge World Cup fan, they all deserve standing ovations. I caught the last half hour or so of the Brazil vs. The Netherlands World Cup soccer game and my heart is still beating fast. The goal scored by the Dutch team, using three players' heads as though they were playing volleyball, was breathtaking. For Holland to have beaten the number 1 ranked team, Brazil, at the 2010 World Cup speaks to the continuing drama and surprises taking place at this fascinating World Cup in South Africa. Thanks to Brazil and to its players, coaches, and fans for bringing their special skills in soccer to the world stage, despite the loss today.
I hear from my academic colleagues in The Netherlands that the country is experiencing a very serious case of soccer frenzy, and justifiably so. I send my condolences to my colleagues in Brazil for their team's loss today, which must be very painful since it was so unexpected. In an earlier blog post, I wrote about the speculation as to whether (or not) Grigoriy Perelman would accept the $1,000,000 math prize from the Clay Institute. According to the Clay Institute of Cambridge, Massachusetts (and this news was even reported in our local paper), Dr. Perelman, of St. Petersburg, Russia, has decided to decline the Millennium Prize for his contribution to solving the Poincaré conjecture. The Institute President, Jim Carlson, was quoted as saying that Perelman's decision was not a complete surprise, since he had declined some previous math prizes. Dr. Perelman told Interfax that he considered his contribution to solving the Poincaré conjecture no greater than that of the Columbia University mathematician, Richard Hamilton. In our local paper, the Daily Hampshire Gazette, it is stated that the Clay Institute officials will meet this Fall to decide what to do with the million dollars, with Carlson saying, "We have some ideas in mind. We want to consider that carefully and make the best use possible of the money for the benefit of mathematics." May I suggest that the money be used for scholarships and fellowships for deserving students and researchers in mathematics and associated technical subjects? The Pew Research Center has released another study through its Global Attitudes Project in conjunction with The International Herald Tribune, which is reported on in The New York Times, and which will surely generate provocative discussions. 
According to the study, which surveyed people in 22 different countries, people say that they firmly support equal rights for men and women, but many still believe that men should get preference when it comes to good jobs, higher education or even in some cases the simple right to work outside the home. The poll, conducted in April and May, suggests that in both developing countries and wealthy ones, there is a pronounced gap between a belief in the equality of the sexes and how that translates into reality. In nations where equal rights are already mandated, women seem stymied by a lack of real progress, the poll found. Several quotes in The New York Times article, by female professors, especially resonated with me. Professor Herminia Ibarra, who teaches organizational behavior at INSEAD, the international business school based in Fontainebleau, France, is quoted as saying: There are still very few women running large organizations, and business culture remains resolutely a boys’ club. And Professor Jacqui True, who teaches at the University of Auckland in New Zealand, is quoted as saying: When you’re left out of the club, you know it. When you’re in the club, you don’t see what the problem is. Clearly, women, as professionals, may belong to several "clubs," in which they can be valued members (or not) -- from their immediate departments in their academic or employer organizations to different professional organizations or societies and communities. What is essential is that professionals understand and take part in activities of organizations that are broader in scope than simply their local ones. In this sense, they can obtain not only sustenance in being part of larger communities but they can also engage in lifelong learning opportunities and build relationships that they can rely upon when the "going gets tough." Women's professional voices are being increasingly heard -- through podcasts and even through blogs and social networking sites. 
I would like to single out, as an example, the learning resources of INFORMS, one of the professional societies that I belong to, which through the extraordinary efforts of Barry List, has put together a series of podcasts of wide interest and relevance. You will see that several podcasts are interviews with female experts. If you would like to see the full list in digital format, just click here. Students and future generations need to see many different role models in terms of gender and race since the myriad problems that the world is facing from the environment to inequality to wars and strife cannot be addressed through the eye of a needle.
A method of treating patients for compulsive overeating includes stimulating left and right branches of the patient's vagus nerve simultaneously with electrical pulses in a predetermined sequence of a first period in which pulses are applied continuously, alternating with a second period in which no pulses are applied. The electrical pulses are preferably applied to the vagus nerve at a supradiaphragmatic location. The present invention relates generally to methods and apparatus for treating eating disorders by application of modulating electrical signals to a selected cranial nerve, nerve branch or nerve bundle, and more particularly to techniques for treating patients with overeating disorders, and especially obese patients, by application of such signals bilaterally to the patient's vagus nerve with one or more neurostimulating devices. Increasing prevalence of obesity is one of the most serious and widespread health problems facing the world community. It is estimated that currently in America 55% of adults are obese and 20% of teenagers are either obese or significantly overweight. Additionally, 6% of the total population of the United States is morbidly obese. Morbid obesity is defined as having a body mass index of more than forty, or, as is more commonly understood, being more than one hundred pounds overweight for a person of average height. This data is alarming for numerous reasons, not the least of which is that it indicates an obesity epidemic. Many health experts believe that obesity is the first or second leading cause of preventable deaths in the United States, with cigarette smoking either just lagging or leading. A recent study from the Kaiser HMO system has demonstrated that morbid obesity drastically increases health care costs (Journal of the American Medical Association (JAMA)). It is the consequences of being overweight that are most alarming. 
Obesity is asserted to be the cause of approximately eighty percent of adult onset diabetes in the United States, and of ninety percent of sleep apnea cases. Obesity is also a substantial risk factor for coronary artery disease, stroke, chronic venous abnormalities, numerous orthopedic problems and esophageal reflux disease. More recently, researchers have documented a link between obesity, infertility and miscarriages, as well as post menopausal breast cancer. Despite these statistics, treatment options for obese people are limited. Classical models combining nutritional counseling with exercise and education have not led to long term success for very many patients. Use of liquid diets and pharmaceutical agents may result in weight loss which, however, is only rarely sustained. Surgical procedures that cause either gastric restriction or malabsorption have been, collectively, the most successful long-term remedy for severe obesity. However, this type of surgery involves a major operation, can lead to emotional problems, and cannot be modified readily as patient needs demand or change. Additionally, even this attempted remedy can sometimes fail (see, e.g., Kriwanek, “Therapeutic failures after gastric bypass operations for morbid obesity,” Langenbecks Archiv. Fur Chirurgie, 38(2): 70-74, 1995). It is difficult to document many cases of long term success with dietary counseling, exercise therapy and behavioral modification. The introduction of pharmacologic therapy may help improve these results; however, to date pharmacologic remedies have not been able to document long term success. In addition, the chronic use of these drugs can lead to tolerance, as well as side effects from their long term administration. And, when the drug is discontinued, weight returns. To date, surgical procedures such as gastric bypass or vertical banded gastroplasty have demonstrated the best long term success in treating people with morbid obesity. 
However, these operations are highly invasive and carry risks of both short and long term complications. Additionally, such operations are difficult to modify, and cannot be regulated up or down if the clinical situation changes. As a result, a pressing need currently exists for better treatment options for obesity. The long-term failure of liquids and pharmaceuticals aptly demonstrates a need for a life-long control mechanism. A perfect treatment would be adjustable and could be regulated as needed. It would need to be with the patient at all times. The applicants herein are convinced that vagal nerve stimulation has the potential to meet those requirements as a safe and effective treatment for obesity, through an extension of the vagal stimulation technique disclosed in U.S. Pat. No. 5,263,480 to J. Wernicke et al., assigned to the same assignee as the present application. The '480 patent discloses that treatment for eating disorders in general, and obesity and compulsive overeating disorder in particular, may be carried out by selectively applying specially adapted modulating electrical signals to the patient's vagus nerve by a neurostimulator which is preferably totally implanted in the patient, but may alternatively be employed external to the body or even percutaneously. The modulating signals themselves may be stimulating or inhibiting with respect to the electrical activity of the vagus nerve, but for purposes of that patent, both cases were sometimes included within the term “stimulating”. In essence, stimulation of vagal activity could cause more neural impulses to move up the nerve whereas inhibition of vagal activity could block neural impulses from moving up the nerve. The modulating signals can be used to produce excitatory or inhibitory neurotransmitter release. 
According to the '480 patent, strategies for vagal modulation, including adjusting the parameters for electrical stimulation of the vagus nerve, nerve fibers or nerve bundle, depend on a number of factors. Among these are considerations of which part(s) of the nerve or the nerve fibers are to be subjected to the modulating signals; whether the patient experiences a “feeling” or sensation at the onset of the disorder or a symptom of the disorder which can be used to activate the neurostimulation generator or, alternatively, a physiologic signal is generated which can be detected and employed to trigger the modulation; and/or whether a “carryover” or refractory period occurs after modulation in which the benefit of the modulation is maintained. Further, for example, appropriate setting of pulse width and amplitude of the stimulating (modulating) signal at the output of the neurostimulator, applied via electrode(s) to the vagus nerve, might allow particular fibers of the nerve to be selectively stimulated. Also, the precise signal pattern to be used, such as the length of the time intervals in which the signal is on and off, might be adjusted to the individual patient and the particular eating disorder being treated. In treatment of obesity, the '480 patent hypothesized that vagal stimulation could be used to produce appetite suppression by causing the patient to experience satiety, a sensation of “fullness,” which would naturally result in decreased intake of food and consequent weight reduction. In effect, the brain perceives the stomach to be full as a result of the treatment. 
In a then-preferred embodiment of the invention disclosed in the '480 patent for treating patients with compulsive overeating/obesity disorders, an implantable neurostimulator included a signal generator or electronics package adapted to generate an electrical output signal in the form of a sequence of pulses, with parameter values programmable by the attending physician within predetermined ranges for treating the disorder, and a lead/electrode system for applying the programmed output signal to the patient's vagus nerve. Calibration of the overall treatment system for a particular patient was to be performed by telemetry by means of an external programmer to and from the implant. The implanted electronics package might be externally programmed for activation upon occurrence of a predetermined detectable event, or, instead might be periodically or continuously activated, to generate the desired output signal with parameter values programmed to treat obesity by modulating vagal activity so as to produce a sensation of satiety. In alternative embodiments of the invention disclosed in the '480 patent, the stimulus generator or electronics package might be located external to the patient, with only an RF coil, rectifier and the lead/nerve electrode assembly implanted; or with the lead implanted percutaneously through the skin and to the nerve electrode. The latter technique was least preferred because special precautions would be needed to avoid possible infection via the path from outside the body to the nerve along the lead. In a preferred method of use according to the '480 patent, the stimulus generator of the neurostimulator is implanted in a convenient location in the patient's body, such as in the abdomen in relatively close proximity to the stimulating electrode system and, if applicable, to the detecting system. 
For treating compulsive overeating and obesity, it might be desirable to ascertain the patient's food intake, i.e., the quantity of food consumed, for example by means of implanted sensing electrodes in or at the esophagus to detect passage of food as the patient swallowed. The swallows could be summed over a preselected time interval to provide an indication or estimate of the amount of food consumed in the selected interval. Modulation of vagal activity would then be initiated if the summation exceeded a predetermined threshold level. In the preferred embodiment of the '480 patent, the stimulating electrode (nerve electrode e.g., a cuff) would be implanted about the vagus nerve or a branch thereof in the esophageal region slightly above the stomach, and the vagal stimulation applied to produce or induce satiety. As a result, the patient would experience a satisfied feeling of fullness at a level of consumption sufficient to maintain physiologic needs but supportive of weight reduction. In another method according to the '480 patent, the appropriately programmed output signal of the neurostimulator is applied periodically to modulate the patient's vagus nerve activity, without regard to consumption of a particular quantity of food, except perhaps at prescribed mealtimes during normal waking hours according to the patient's circadian cycle. The intent of such treatment was to suppress the patient's appetite by producing the sensation of satiety between normal mealtimes. 
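The swallow-summation trigger just described amounts to a sliding-window event counter. As an illustrative sketch only (Python is not part of the patent, and the window length and threshold values below are hypothetical placeholders, not values from the disclosure):

```python
from collections import deque

def make_swallow_trigger(window_seconds, threshold_count):
    """Return a function that records swallow timestamps (in seconds) and
    reports whether vagal modulation should be initiated, i.e. whether the
    number of swallows within the preselected interval exceeds the threshold."""
    events = deque()

    def record(timestamp):
        events.append(timestamp)
        # Discard swallows that fall outside the preselected time interval.
        while events and timestamp - events[0] > window_seconds:
            events.popleft()
        # Initiate modulation when the summation exceeds the threshold level.
        return len(events) > threshold_count

    return record

# Hypothetical settings: more than 20 swallows within a 10-minute window.
trigger = make_swallow_trigger(window_seconds=600, threshold_count=20)
```

A deque is used so that expired events can be dropped from the front in constant time as new swallows arrive.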
Alternatively, or in addition to either or both of automatic detection of the event and activation of the signal generation in response thereto, or intermittent or sustained activation according to the circadian cycle, the neurostimulator electronics package could be implemented for manual activation of the output signal by the patient, as by placement of an external magnet over the implanted device (to close a switch), or by tapping the region over the device (to cause it to respond to the sound or vibration), or by use of an RF transmitter, for example. Manual activation would be useful in situations where the patient has an earnest desire to control his or her eating behavior, but requires supportive measures because of a lack of sufficient will power or self-control to refrain from the compulsive behavior, such as binge eating or simply overeating, in the absence of the neurostimulation device. The vagus nerve is the dominant nerve of the gastrointestinal (GI) tract (see, e.g., Berthoud et al., “Morphology and distribution of vagal afferent innervation of rat gastrointestinal tract,” Soc. Neurosci. Abstr., 17(2), 1365, 1991). A right and a left vagus connect the GI tract to the brain. After leaving the spinal cord, the vagal afferents transport information regarding the GI tract to the brain. In the lower part of the chest, the left vagus rotates, becomes the anterior vagus, and innervates the stomach. The right vagus rotates to become the posterior vagus, which branches into the celiac division and innervates the duodenum and proximal intestinal tract. While the vagus is often thought of as a motor nerve which also carries secretory signals, 80% of the nerve is sensory consisting of afferent fibers (see, e.g., Grundy et al., “Sensory afferents from the gastrointestinal tract,” Handbook of Physiology, Sec. 6, S.G., Ed., American Physiology Society, Bethesda, Md., 1989, Chapter 10). 
While the exact mechanisms that make us feel full are still being determined, much information has been accumulated. Satiety signals include the stretch of mechanoreceptors, and the stimulation of certain chemosensors (“A Protective Role for Vagal Afferents: An Hypothesis,” Neuroanatomy and Physiology of Abdominal Vagal Afferents, Chapter 12, CRC Press, 1992). These signals are transported to the brain by the nervous system or endocrine factors such as gut peptides (“External Sensory Events and the Control of the Gastrointestinal Tract: An Introduction,” Neuroanatomy and Physiology of Abdominal Vagal Afferents, Chapter 5, CRC Press, 1992). The role of vagal afferents in the transmission of these signals has been demonstrated by numerous studies. Ritter et al. have demonstrated that direct infusion of maltose and oleic acid into the duodenum of rats leads to a reduction in oral intake. This response is ablated by vagotomy or injection of capsaicin, which destroys vagal afferents. Similarly, systemic cholecystokinin has been demonstrated to reduce intake in rats. This response is also ablated by destruction of vagal afferents. A plethora of literature makes it clear that vagal afferent fibers are an important source of information to the brain regarding the quantity and quality of the ingesta. The present invention is based on the applicants' study of particular methods and techniques of vagus nerve stimulation after numerous studies that have indicated the vagus to be an important nerve transporting satiety signals from the gut to the brain. Studies in rat models have demonstrated that the vagus nerve is the “information superhighway” for conducting signals from agents such as cholecystokinin and enterostatin. It remains to be determined whether and how such signals could be mimicked by using vagal nerve stimulation. 
Greater attention to use of vagal stimulation in treating obesity is also prompted in part by the knowledge that vagal nerve stimulation has been shown to be safe and effective when used long-term to treat epilepsy. That is to say, the regimen in studies involving use of vagal stimulation techniques to treat obesity would not involve the extreme measures or short- and long-term side effects on the patient that have characterized treatment methods of the type described above in the background section. According to the present invention, a method of treating patients for obesity includes performing bilateral stimulation of the patient's vagus nerve by applying a stimulating electrical signal to the right and left vagi, wherein the parameters of the signal are predetermined to produce a sensation of satiety in the patient. The signal could be applied synchronously to the right and left vagi or asynchronously. The stimulating electrical signal is preferably a pulse signal which is applied intermittently to the right and left vagi according to the duty cycle of the signal (i.e., its on and off times). Also, the intermittent application of the stimulating electrical signal is preferably chronic, rather than acute. Nevertheless, it is possible that the bilateral stimulation could be delivered continuously to the right and left vagi to achieve some success in the treatment, and/or that acute application might suffice in some circumstances. Also, it is conceivable that the stimulating electrical signal applied acutely to the right and left vagi during a customary mealtime, or from a short time preceding and/or following the mealtime, according to the patient's circadian cycle, could be somewhat effective in certain cases. 
Although an automatic delivery of bilateral intermittent stimulation is preferred, it is also possible that application of the stimulating electrical signal to the right and left vagi might be controlled by an external commencement signal administered by the patient, as by use of an external magnet brought into proximity with the implanted device. In general, the same stimulating electrical signal is applied to both the right and left vagi, but it may also be possible to apply a different stimulating electrical signal to the right vagus from the stimulating electrical signal applied to the left vagus. Further, although two separate nerve stimulator generators may be implanted for stimulating the left and right vagi, a single nerve stimulator generator may be implanted for bilateral stimulation if the same signal is to be applied to both the left and right branches of the vagus, whether delivered synchronously or asynchronously to the vagi. Preferably, the stimulating electrical signal is applied at the supradiaphragmatic position of the left and right vagi. Also, the stimulating signal is characterized by a current magnitude below a predetermined physiological response to stimulation called the retching level of stimulation of the patient. This is to assure that the patient will not suffer from nausea during the periods of vagus nerve stimulation. In summary, then, the most preferred method of treating patients for obesity includes stimulating the left and right branches of the patient's vagus nerve simultaneously with electrical pulses in a predetermined sequence of a first period in which pulses are applied continuously, alternating with a second period in which no pulses are applied, and in which the electrical pulses are applied to the vagus nerve at a supradiaphragmatic location. 
The pulses preferably have an electrical current magnitude not exceeding about 6 ma, but in any event, the magnitude is preselected to be less than the level that would induce retching in the patient as determined at the time of the initial implant(s). The pulse width is adjusted to a value not exceeding about 500 ms, and the pulse repetition frequency is set at about 20-30 Hz. The second period is preferably about 1.8 times as long as the first period in the alternation of application of the stimulating pulses (i.e., the on/off duty cycle is at a ratio of 1:1.8). The pulse parameters including on time and off time are programmable by the implanting physician, using an external programmer. Apparatus according to the invention for treating patients suffering from obesity eating disorder includes implantable neurostimulator device means for simultaneously stimulating left and right branches of the patient's vagus nerve with electrical pulses in a predetermined sequence of a first period in which pulses are applied continuously, alternating with a second period in which no pulses are applied; and electrode means for implantation on the right and left branches in a supradiaphragmatic position. Accordingly, it is a principal objective of the present invention to provide improvements in methods and apparatus for treating and controlling overeating disorder, especially in obese patients. It is a more specific aim of the invention to provide methods of treating and controlling compulsive overeating and obesity by bilateral intermittent pulse stimulation of the right and left vagi at a supradiaphragmatic position in the patient. The sole FIGURE is a simplified fragmentary illustration of the stimulus generator and lead/electrode system of the neurostimulator implanted in the patient's body. A generally suitable form of neurostimulator for use in the apparatus and method of the present invention is disclosed in U.S. patent application Ser. No. 07/434,985, filed Nov. 
13, 1989 in the names of Reese S. Terry, Jr., et al. (referred to herein as “the '985 application”), now U.S. Pat. No. 5,154,172 assigned to the same assignee as the instant application. The specification of the '985 application is incorporated herein in its entirety by reference. According to the present invention, the patient is treated with bilateral stimulation of the right and left vagi branches at the supradiaphragmatic position of the vagus nerve, using neurostimulators (e.g., the NCP® generator available from Cyberonics, Inc. of Houston, Tex. (Cyberonics)) placed, for example, via a left anterior thoracic incision. A standard Cyberonics Bipolar Lead nerve electrode, for example, is attached to the nerve generator after the patient's eating behavior is standardized and a stable dietary pattern is observed. In dog tests conducted by the applicants herein, the dietary pattern included twice-a-day feedings of approximately 400 grams of solid food with one scoop of soft meat product added to make the food more edible. During the surgical procedure, a threshold referred to herein as the retching threshold was documented while the animal was under anesthesia, based on the threshold value of the stimulus output current of the device at which the animal exhibited a retching or emetic response. The amount of current was adjusted to determine this threshold. Other parameters were left fixed at a frequency of 30 Hertz (Hz), a pulse width of 500 milliseconds (ms), and an on/off cycle of one minute on and 1.8 minutes off. Following the implant of the bilateral nerve stimulators, the animals were allowed to stabilize. Once eating behavior returned to preoperative levels the vagal nerve stimulators were turned on in two canines. These two were given chronic intermittent bilateral nerve stimulation over a twenty-four hour period. Initial amplitude was set at approximately 1.0 to 1.5 milliamperes (mA) below the retching threshold, and adjusted thereafter. 
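The chronic intermittent schedule used in these tests (one minute on, 1.8 minutes off, i.e. an on/off ratio of 1:1.8) and the amplitude rule (initially about 1.0 to 1.5 mA below the retching threshold, capped at about 6 mA) reduce to simple arithmetic. The following Python sketch is illustrative only and not part of the disclosed device; the 1.25 mA margin is a hypothetical midpoint of the stated range:

```python
def stimulation_on(t_seconds, on_time=60.0, off_time=108.0):
    """True while the generator is in the 'on' portion of its intermittent
    duty cycle. Defaults reflect the one-minute-on / 1.8-minutes-off
    (1:1.8 on/off) cycle described in the tests."""
    return (t_seconds % (on_time + off_time)) < on_time

def fraction_on(on_time=60.0, off_time=108.0):
    """Fraction of each cycle during which pulses are delivered
    (about 35.7% for the 1:1.8 ratio)."""
    return on_time / (on_time + off_time)

def initial_amplitude(retching_threshold_ma, margin_ma=1.25, cap_ma=6.0):
    """Initial pulse current: roughly 1.0-1.5 mA below the measured
    retching threshold, never exceeding the ~6 mA ceiling.
    The margin value is a hypothetical choice within that range."""
    return min(retching_threshold_ma - margin_ma, cap_ma)
```

With these defaults a full cycle lasts 168 seconds, of which the first 60 seconds deliver pulses; the modulo operation simply locates the current time within that repeating cycle.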
The retching thresholds in mA increased over a period of days. Both chronic dogs behaved in the same manner. Initially there was no change in the eating behavior. Approximately seven to ten days later, while still being subjected to chronic intermittent bilateral nerve stimulation, eating behavior changed in both dogs. They demonstrated a lack of enthusiasm for their food, while maintaining normal behavior for all other aspects of laboratory life. Instead of consuming their meal in approximately five minutes, as had been their customary preoperative behavior, their meal consumption took between fifteen and thirty minutes. More striking was the observed manner in which they consumed the food; each of the two would eat a small portion, leave the food dish, walk around, and ultimately return to the food from what appeared to be more a case of instinct than desire. To make certain a real effect attributable to the bilateral stimulation was being observed, after a six week period in which the intermittent stimulation was maintained, and consistent, altered eating behavior of the dogs continued, the stimulation was turned off. A remarkable change in eating behavior was observed in each dog within one week after stimulation was discontinued, each dog exhibiting a return to its normal eating pattern after a few to several days in which it enthusiastically consumed its entire meal. Then, both stimulators were turned back on to provide the chronic intermittent bilateral stimulation in each animal, and the eating pattern of the animal slowed once again after approximately 10 to 15 days to what had been observed in the postoperative period following such stimulation. Further study was performed to determine whether unilateral stimulation would suffice, and whether a difference could be discerned between stimulation of the right vagus versus the left vagus. 
With only the left nerve stimulator turned on for intermittent stimulation over a period of several days, no slowing in the animal's eating behavior was observed. The left stimulator was then turned off, and the latter testing was duplicated, this time using only right vagus nerve stimulation. Once again, after a period of several days of unilateral intermittent stimulation, no slowing of the animal's eating behavior was observed. Finally, both nerve stimulator generators were turned back on and, after a period of several days of the bilateral stimulation, each animal's eating behavior reverted to the slowed pace that had been observed in the postoperative period following such stimulation. The applicants postulate that these tests demonstrate that bilateral chronic intermittent stimulation is effective to change eating behavior in animals, and this same treatment is expected to be effective in changing eating behavior in obese human patients and in human patients suffering from compulsive overeating disorder, whether or not the patient is obese in the more strict sense of that term. Moreover, the testing further demonstrated by use of acute as well as chronic stimulation that a positive response of satiety was the cause of the lack of interest of the animals in food, rather than a negative response of nausea or sick stomach. In the acute testing protocol the animals were not subjected to bilateral stimulation of the vagi until fifteen minutes to one half hour before feeding time, and throughout the meal. Such acute bilateral stimulation failed to change the eating behavior of the animals from normal baseline eating pattern to a demonstrably slowed eating pattern—a change that would have been expected to occur if the stimulation had the effect of producing nausea. 
These tests tend to show that the slowed eating and apparent disinterest in food consumption is centrally mediated and the result of producing a sensation of satiety mimicking that which would occur after consumption of a full meal. The characterization of the bilateral stimulation as being “intermittent” is made in the sense that the stimulation was performed following a prescribed duty cycle of application of the signal. The latter is a pulse signal, and is applied with a prescribed or preset or predetermined on-time of the pulses, followed by a prescribed or preset or predetermined off-time of the pulses, which could be the same as but in general is different from the on-time. It is possible, however, depending upon other parameters of the electrical pulse signal, that a continuous signal might be effective to produce the slowed eating behavior. It is also possible to use a single implanted nerve stimulator (pulse generator) with appropriate duty cycle to provide the bilateral stimulation of both vagal branches, right and left. Or the stimulation may be different for each branch and use different implanted stimulators. And although implanted stimulators are preferred, it is also possible to treat patients receiving clinical or in-hospital treatment by means of external devices that provide vagal stimulation via leads and electrodes implanted in the patient. Wholly implanted devices are preferred, however, because they allow patients to be completely ambulatory, without interfering with routine daily activities. Two other dogs with bilateral stimulators were studied in a different fashion. Initially their stimulators were left off (inactive), and were only turned on just prior to challenging the animal with food, that is, a few minutes before the meal, and during the meal. No effect on eating behavior was observed in response to such acute bilateral vagus nerve stimulation. 
That is, each dog followed its normal or baseline preoperative eating behavior without noticeable or perceptible slowing. Some differences from stimulator to stimulator in magnitude of current in the pulses of the electrical stimulation signal may be observed, and may be attributable to things such as patient impedance, variation of the vagus nerve from right to left or between patients, and variation in contact between the vagus and the electrode implanted thereon from implant to implant. Although certain preferred embodiments and methods of treating and controlling eating disorders through vagal modulation according to the invention have been described herein, it will be apparent to those skilled in the field from a consideration of the foregoing description that variations and modifications of such embodiments, methods and techniques may be made without departing from the true spirit and scope of the invention. Accordingly, it is intended that the invention shall be limited only to the extent required by the appended claims and the rules and principles of applicable law. performing bilateral stimulation of the patient's vagus nerve by applying a stimulating electrical signal directly and intermittently to the right and left vagi, wherein the parameters of said signal are predetermined to produce a sensation of satiety in the patient. 2. The method of claim 1, including the step of applying said stimulating electrical signal intermittently to the right and left vagi during a customary mealtime according to the patient's circadian cycle. 3. The method of claim 1, including the step of applying said stimulating electrical signal intermittently to the right and left vagi upon delivery of an external commencement signal administered by the patient. 4. The method of claim 1, including the step of applying the same stimulating electrical signal intermittently to both the right and left vagi. 5. 
The method of claim 1, including using separate nerve stimulator generators for intermittently stimulating the left and right vagi. 6. The method of claim 1, including the step of applying said stimulating electrical signal supra diaphragmatically to the left and right vagi. 7. The method of claim 1, wherein said stimulating electrical signal is characterized by a current magnitude below a predetermined retching level. 8. The method of claim 1, wherein said stimulating electrical signal is a pulse signal having a prescribed on-off duty cycle. 9. The method of claim 8, including the step of applying said stimulating electrical signal continuously to the right and left vagi so that pulses are applied during the on portion of said duty cycle and not during the off portion of said duty cycle. 10. The method of claim 9, including using separate nerve stimulator generators for stimulating the left and right vagi. 11. The method of claim 9, including the step of applying said stimulating electrical signal supra diaphragmatically to the left and right vagi. 12. The method of claim 9, wherein one of said parameters of said stimulating electrical signal is a pulse current magnitude below a predetermined level at which the signal tends to produce retching in the patient. 13. The method of claim 9, wherein said pulse signal has a pulse current magnitude in a range up to about 6 ma. 14. The method of claim 13, wherein said pulse signal has a pulse width in a range up to about 500 ms. 15. The method of claim 14, wherein said pulse signal has a repetition frequency of about 30 Hz. 16. The method of claim 15, wherein said pulse signal has a duty cycle with a ratio of on to off of about 1:1.8. 17. The method of claim 1, wherein said electrical signal is applied synchronously to the right and left vagi. 
bilaterally stimulating the patient's vagus nerve by chronically applying a stimulating electrical signal intermittently to the right and left vagi, the parameters of said signal being selected to produce a sensation of satiety in the patient.

bilaterally stimulating the patient's vagus nerve by applying a stimulating electrical signal intermittently to the right and left vagi from said implanted separate nerve stimulator generators, the parameters of said signal being selected to produce a sensation of satiety in the patient.

bilaterally stimulating the patient's vagus nerve by applying a stimulating electrical signal intermittently to the right and left vagi from said implanted nerve stimulator generator apparatus, the parameters of said signal being selected to produce a sensation of satiety in the patient.

bilaterally stimulating the patient's vagus nerve by applying a stimulating electrical signal in the form of a pulse signal having a prescribed on-off duty cycle continuously to the right and left vagi from said implanted separate nerve stimulator generators, so that pulses are applied during the on portion of said duty cycle and not during the off portion of said duty cycle, the parameters of said signal being selected to produce a sensation of satiety in the patient.

directly stimulating the left and right branches of the patient's vagus nerve simultaneously with electrical pulses in a predetermined sequence of a first period in which pulses are applied continuously, alternating with a second period in which no pulses are applied.

23. The method of claim 22, including the step of applying said electrical pulses to the vagus nerve at a supradiaphragmatic location.

24. The method of claim 23, wherein said pulses have an electrical current magnitude not exceeding about 6 mA.

25. The method of claim 24, wherein said electrical current magnitude is preselected to be less than a level that induces retching in the patient.

26. The method of claim 25, wherein said pulses have a width not exceeding about 500 μs.

27. The method of claim 26, wherein said pulses have a repetition frequency of about 30 Hz.

28. The method of claim 27, wherein said second period is 1.8 times as long as said first period.
2019-04-19T10:44:51Z
https://patents.google.com/patent/US6587719B1/en
1) This paper is a Nigerian peer review paper, which will be presented at FIG Working Week 2013, 6-10 May, in Abuja, Nigeria. We are pleased to share this peer review paper with you ahead of the conference, to highlight one of the challenges that Nigerian surveyors are dealing with, namely land access restrictions. Together with UNEP, the authors have undertaken a comprehensive environmental survey of several communities in the Niger Delta region, and their findings and methods are interesting not only in Nigeria but can be applied in countries all over the world. At the conference, many further papers from Nigeria, Africa and internationally will be presented that highlight the current challenges for surveyors.

Environmental surveys that require access to communal, family and individual farmlands, mangrove swamps or fishing villages to obtain data can be very challenging to any team of environmental professionals working locally or on international development related projects. Land access restrictions may be imposed by different interest groups or stakeholders whose actions could interfere with the overall conduct or success of any environmental survey, irrespective of its laudable goals and objectives. It might also be that traditional land tenure patterns differ significantly from land tenure patterns as understood by a multicultural project management team. In Nigeria, following a Federal Government invitation in 2006, the United Nations Environment Programme (UNEP) undertook a comprehensive environmental survey of several communities in the Niger Delta region, following reported and documented high levels of hydrocarbon pollution in these areas.
This paper presents the key considerations behind an innovative, culture-based community entry and land access strategy developed by the UNEP project management team in a collaborative partnership with the Rivers State University of Science and Technology (RSUST), and highlights the challenges encountered in the practical implementation of several key stages of the land access strategy. It documents real-life challenges as they were experienced in the field, which served as feedback to the process and produced refinement and adaptation options for replication in similar environmental studies.

Environmental surveys can range from very simple projects involving the investigation of a single site to large and more complex projects involving multiple locations and investigating multiple environmental media. In simple terms, an environmental assessment project will involve a preliminary historical and literature review of the study area, minor or major fieldwork and sampling, followed by laboratory analysis and report production. Being in the nature of a project with set objectives, it is expected that fundamental project management principles and procedures will apply and that predetermined goals and targets will be met. Land is an asset of enormous importance for billions of rural dwellers in the developing world, and especially in ACP countries, where land is not just an economic asset but has strong political, social, cultural and spiritual dimensions (Boto, Peccerella, & Brasesco, 2012, p. 5). However, in real-life situations, as in the case of the UNEP Ogoniland project, a combination of traditional and innovative project management strategies had to be used in order to achieve overall success of the project. One such innovation, amongst several others, was the use of a culture-based land access strategy.
UNEP acknowledges that the two-year study of the environment and public health impacts of oil contamination in Ogoniland is one of the most complex on-the-ground assessments ever undertaken by UNEP (UNEP, 2011, p. 8). This assertion is quite significant judging by published statistics on the number of community and town-hall meetings that were held throughout the life of the project. The RSUST-driven land access strategy was implemented by a Land Access Team (LAT) made up of academics and student interns drawn from the departments of Estate Management, Urban and Regional Planning, and Land Surveying, in collaboration with academics from the departments of Estate Management and Geo-informatics at the Rivers State Polytechnic (RIVPOLY) in Bori. This innovative strategy went through several iterative phases and refinements throughout implementation, informed by review of daily feedback from the UNEP technical teams in the field. The process achieved reasonable success in meeting its set objectives and can be used, or adapted for use, in similar development and environmental assessment projects.

A project is essentially a way of organising people and a way to manage tasks. The British Standard BS 6079-1 defines a project as a unique set of coordinated activities, with a definite starting and finishing point, undertaken by an individual or organization to meet specific objectives within defined schedule, cost and performance parameters. Project management is simply a style of coordinating and managing work. What differentiates it from other styles of management is that it is totally focused on a specific outcome, and when this outcome is achieved, the project ceases to be necessary and is stopped (Newton, 2009, p. 11). Projects can be categorized by their content, complexity and scale. Complexity can be assessed by whether a project is risky, novel or intellectually complex.
Project management has been defined in several ways, but in terms of real-life applications it means different things to different people and disciplines. A project management team is, however, responsible for determining what is appropriate for any given project. The PMBOK Guide (PMI, 2010) definition, meaning and theories of project management provide a general framework for this review. It defines a project as a temporary endeavour undertaken to create a unique product, service or result, which as such has a definite beginning and definite end (PMI, 2010, p. 4). Achieving a project's objectives signals the end of a project, unless it has to be terminated. Deliverables are expected at the end of each project or of sub-project components of a main project. These would usually be in the form of products, services or results, usually documented. The Guide generally assumes a structured approach to projects, but where an unstructured or adaptive approach succeeds, capturing the methodology which led to this success makes it adaptable for replication.

The aims and objectives of this paper are to present details of the novel approach utilized in the Ogoniland study in the area of land access and community entry, in a technical collaboration between UNEP and the Rivers State University of Science and Technology (RSUST), Nigeria. A project may be defined as the investment of capital in a time-bound intervention to create productive assets; the energy and inventiveness of people plays an important role in projects, and this role is just as important as the expenditure of physical and financial resources (Cussworth & Franks, 1993). Projects vary in type and size, and their cycles may also differ. The general idea, however, is that a project goes through several stages and phases, from initiation through implementation and subsequently to evaluation. The assumption might be that this is a linear relationship, but in real-life experience it is much more complex.
A cyclic pattern of projects is more common. In social science related projects that deal with human capital, a more adaptive strategy is advocated. Project management involves the application of knowledge, skills, tools and techniques to project activities to meet project requirements (PMI, 2010, p. 8). It is a broad field, but one of the significant requirements of a project management team is the ability to adapt its approach to the different concerns of various stakeholders. Projects do not take place in a vacuum but are implemented in a web of social, cultural, economic and other contexts. An understanding of the nature of a particular project will enable a reader to appreciate the project management challenges involved therein.

The UNEP Environmental Assessment of Ogoniland project was commissioned by the Federal Government of Nigeria in 2006. The main purpose of the project was to assess the extent of oil pollution in Ogoniland following the failure of decades of negotiation, initiatives and protests to deliver a solution to oil production related unrest and crises in parts of the Niger Delta. The geographical description of Ogoniland, as per the UNEP study, covers four Local Government Areas (LGAs) of Rivers State in Nigeria: Khana LGA, Tai LGA, Gokana LGA and Eleme LGA. There are several ways to manage a project in order to achieve the desired objectives of the specific project, but this cannot be accomplished without taking the project environment into consideration. The complexity, risk, size and resources, including other socio-cultural or socio-economic considerations, will determine the final approach.
The UNEP-led Ogoniland project can best be described as a complex project: it was risky, novel and intellectually complex. It is a classic example of a project in which an adaptive strategy was applied throughout its duration, working in an environment filled with suspicion and distrust while at the same time trying to win the confidence of the people to enable the project to proceed. Essentially, the focus of any environmental assessment project is to collect relevant data, analyse it and produce a report on the findings therefrom. Such a simple description, however, does not match the complexity of the process as evidenced in actual field operations. As with other projects, the socio-cultural environment in which a project takes place presents its own challenges to a project management team, and an understanding of the expectations of the local community is essential.

Several authors within the field of project management advocate standardization of techniques and procedures, and while this is a laudable desire, the real world is a changing one, and the demands of project management become more of a subjective than a deterministic process. A project management process might succeed with a reasonable amount of latitude that allows flexibility and innovation, particularly when working in developing countries. Decision making may then be based on what is feasible and achievable within a given scenario, as against pre-determined models, or a combination of both. There are several issues to consider in any project or in sub-projects of a main project. These include the methodology; implementation of the project management process; the project management culture and organizational structure; estimating; planning and scheduling; project execution; and control and conflict management.
In the case of sub-projects on the Ogoniland study, the larger project was subdivided into manageable units, and single activities on the project were undertaken by project sub-teams along their thematic areas or subcontracted out. The UNEP study project management function was executed by three major teams managed by an international project coordinator and overseen by the UNEP Post-Conflict and Disaster Management Branch (PCDMB) in Geneva and UNEP headquarters in Nairobi. It is important to recognize that different players in a project management team may want a methodology that is designed for their particular benefit, and conflict may often arise between parties, which needs to be sorted out over the life of the project in order to deliver an end product. This project was not an exception, but through a series of meetings and project briefings, conflict issues were easily resolved. The main structure, as detailed in Fig. 1, is outlined below.

The Technical Team consisted of experts who covered the four main thematic areas of the study: contaminated land, vegetation, water and public health. The Cross-Cutting Teams covered remote sensing (analysis of satellite imagery and provision of aerial photography); legal and institutional reviews; and sample management. Community surveys were undertaken by the RSUST team. Support Teams: several support teams provided specific services to both the thematic teams and the cross-cutting teams. They included well drilling; topographical survey; data management; health, safety and logistics; the Land Access Team; and community liaison and communication. The effective coordination of a mega-project of this nature would have been impossible without the technical, cross-cutting and support teams working together to achieve the project goals and objectives.
This paper focuses on the activities of the RSUST/RIVPOLY-driven Land Access Teams and the challenges associated with their task, and makes recommendations for replication in similar projects. A social and organizational theory framework underlies this study. The methodology uses an illustrative case study combined with field research data collection techniques, and provides a detailed description of the design and implementation of an innovative community entry and land access strategy developed by the UNEP project management team in collaboration with RSUST and executed jointly by RSUST and RIVPOLY in conjunction with the UNEP team. Although the field research methodology is generally described as qualitative research, it often includes quantitative dimensions.

This study presents a descriptive account of land access activities undertaken during the Ogoniland project as its primary data source, analyses it, and presents a rich picture of the EIA project management process. Data collection was by participant observation, the examination of field records, and analysis of documents produced within the group of land access personnel. The authors were participants on the project, and as such the participant observation methodology was considered suitable and was utilized to give a first-hand participant account of the project as it occurred. Through a process of self-analysis and project review, the findings are outlined. The advantage of this approach is that it presents a rich picture of the actual process by applying content analysis techniques to the daily field activity log and the LAT daily records of field activities. At the commencement of the environmental assessment project, the only tool available for accessing and inspecting impacted areas was a map showing areas with oil infrastructure and records of historical spills within the proposed study area.
Very few community names were associated with these locations, and a major challenge immediately identified was how to access each impacted location without appearing to be trespassing, actually trespassing, or carrying out activities that could instigate additional conflict in the area, considering that the entire project had conflict resolution as its underlying goal. Considering that all sample collection activities involved land access to specific locations, or across specific locations to nearby creeks or rivers, a robust and transparent land access strategy was required, one that would work in the absence of available records of land ownership in the area. Land ownership verification required knowledge of local land use patterns and traditional verification processes. The initial task in developing a community entry protocol was to obtain a clearer picture and understanding of the specific tasks that were to be undertaken by each of the four thematic technical teams and the cross-cutting teams, including what data they expected to collect during the fieldwork, and how.

The Land Access Team (LAT) members participated in the initiation and development of a community entry protocol that was based on an understanding of traditional land access practices in Ogoniland. Generally, the Ogonis practice a traditional bush-entry system in which payments are demanded and expected to be made prior to entry upon ancestral land. This activity is, however, preceded by meetings with the community chiefs, elders, youth, women and children. The purpose of such meetings is usually to establish a relationship, following which formal business talks can take place and land entry can be authorized, with or without any financial payments. Each step had a specific set of objectives and deliverables, as shown in Fig. 2. The horizontal arrows indicate firewalls, which consist of expected deliverables from the preceding step prior to proceeding further.
Although fast-tracking did occur, it was based on first having assessed the potential risk of skipping any step. If it was not possible to actualise the deliverables from a preceding step, the process terminated at that point or was repeated before progressing to the next phase. The four (4) distinct phases are discussed further below. Towards the later part of the project, a special reconnaissance protocol was developed for use in areas where the community had already been sensitized and there was no need for Step 1, so the process commenced at Steps 2-4 (see Fig. 2).

The main purpose of the pre-entry reconnaissance step was an initial attempt to gain vehicular access to locations within close proximity of the impacted grids on the map of historical spills, and to identify the Local Government Area (LGA) within which each falls. It was also to assess the likely physical access challenges envisaged at the actual reconnaissance phase. This process was facilitated using GIS tools and equipment and driven by the Community Liaison Assistants (CLA) team and the project Technical Assistants (TAs), with support from the health/safety and security teams. Initial contact was established within the general geographical location of impacted areas, and the surrounding communities were identified. The deliverables from Step 1 included the identification of communities and the generation of follow-up activities for the Community Liaison Assistant (CLA), who took over responsibility at this point to make actual contact with the community leaders and arrange a sensitization meeting in Step 2. This was the most important outcome of the process and was what was used to weigh the success or failure of a sensitization activity. The firewalls surrounding this initial step prohibited any physical land entry at this stage, because the more remote villages would most probably not yet have been informed about the project.
This could have resulted in a misconception about the purpose of land entry, with the action being construed as a violation of traditional community land entry protocols or outright trespass. The CLA was the only support team member authorised to physically disembark from the project vehicle (except for technical reasons such as GPS signal failure), in order to interact with the locals to confirm the actual indigenous name of the location and obtain leads to the traditional leadership structure and possibly a contact person. This activity provided the basis upon which the CLA conducted follow-up community-based investigation, established firm contacts and negotiated a community sensitization and stakeholder meeting for the project management team, to be undertaken in Step 2.

The objectives of the Step 2 meeting were: to meet and interact with the landowners and their leaders, youth, women and children, and to familiarize with each other; to inform and educate the community, through their representatives, on the project's goals, objectives and the proposed pattern of field activity; and to obtain democratically appointed community representatives who would work closely with the project team in all future dealings associated with land in the community and throughout the several phases of the project. The nomination of community contact persons was the single most important outcome of this step. It indicated acceptance or otherwise, and was a measure of success or failure at later stages of the project. A Step 2 meeting was considered inconclusive when contact persons were not nominated, despite the level of project awareness it raised. The firewalls surrounding this step hinged on obtaining the names of contact persons who would subsequently work with the project team as community representatives. Without these names, the process could not proceed to the next step, as community acceptance was not certain.
Exceptions to this rule were where a Step 2 meeting was either rescheduled, or it was unanimously agreed that the names and contact telephone numbers of such representatives would be forwarded to the CLA at a later date. The land access negotiation in Step 3 was an important community-based activity during which physical land entry occurred and the owners/occupiers of impacted farmlands were identified, as this was crucial to the future sampling activity and community surveys. The Land Access Team, made up primarily of land management academics and professionals resident in and around the study area and armed with local knowledge of community perceptions and expectations in connection with land, worked with the UNEP team to develop community entry protocols for the technical teams, the cross-cutting teams and the support teams throughout the life of the project.

The LAT coordinated this process and was taken by the community-nominated representatives to visit all known oil spill sites within their area, particularly the historical sites indicated on the UNEP map. These sites were geo-referenced, and the LAT assessed the nature of the terrain and the immediate and potential accessibility challenges, in view of a larger team visiting the area using project vehicles and carrying equipment in subsequent phases. Alternative access routes were explored, and feedback was given to the project health, safety and logistics teams as well as the security support teams for planning. The community-nominated representatives played a key role in the initial identification of family lands and actual landowners. Armed with an understanding of the landholding structure in the area, which is by family, they guided the LAT to the elders, chiefs and family heads of plots of interest, with whom the LAT negotiated access.
Acceptance at this level of investigation was measured by the nomination of persons at the family level to work with the UNEP teams during the sampling phase as labour hands and as family representatives. All de-bushing needs were dealt with using local community labour, selected first by the specific family who owned the land and subsequently approved by the community youth leader(s) and nominated representatives. Conflict situations did arise occasionally where there were controversies over the boundaries of specific sites between different families. The usual approach then was to work with youth from both families, which easily resolved the crisis. In a situation where crops were to be removed to create access, appropriate compensation was estimated, negotiated and paid. The deliverable from this exercise was a confirmed date, or range of possible dates, during which the family representatives would be present for the reconnaissance activities in Step 4 to take place.

Step 4 involved actual entry for the purpose of work in connection with the drilling of boreholes and/or sample collection on community/family/individual land. During this activity, the technical team were physically on the land and were allowed to spend time carrying out their reconnaissance activity. The CLA was also present throughout this activity, while the LAT was there to ensure that all required de-bushing had been done and that the community-nominated representatives were present to guide the TAs in such a way that they did not unknowingly stray into neighbouring farmlands or communities. Where this happened on a few occasions, the combined team of LAT and CLAs were on the spot to sort out these issues with the agitating communities. In severe cases of conflict, the CLA took the matter to the LGA, where it was later resolved. Step 4 of the land entry protocol in any location signalled the beginning of the sampling activity, which commenced with the drilling of ground-water monitoring wells.
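The gated progression through the four steps described above can be sketched as a simple state machine, with each step's deliverable acting as the "firewall" that must be passed before the next step begins. This is an illustrative sketch only, not software used on the project; the step names and function interface are the authors' paraphrase.

```python
# Illustrative sketch of the gated four-step community entry protocol.
# A step may be repeated if its deliverable (e.g. nominated contact
# persons for Step 2) is not obtained; if it still fails, the process
# terminates rather than proceeding past the firewall.

STEPS = [
    "pre-entry reconnaissance",    # Step 1
    "community sensitization",     # Step 2
    "land access negotiation",     # Step 3
    "entry for drilling/sampling", # Step 4
]

def run_protocol(deliverable_met, start=0, max_retries=1):
    """Walk the steps in order, gating each on its deliverable."""
    for step in STEPS[start:]:
        for _ in range(1 + max_retries):   # initial attempt + repeats
            if deliverable_met(step):
                break                      # firewall passed
        else:
            return "terminated at: " + step
    return "completed"

# Fully cooperative communities complete all four steps:
print(run_protocol(lambda step: True))                # completed
# Already-sensitized areas could skip Step 1 (start at index 1):
print(run_protocol(lambda step: True, start=1))       # completed
# Without nominated contact persons, Step 2 blocks progress:
print(run_protocol(lambda step: step != "community sensitization"))
# terminated at: community sensitization
```

The `start` parameter mirrors the special reconnaissance protocol the paper mentions for communities that had already been sensitized, where the process commenced at Step 2.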
There were several challenges during this phase, particularly due to the fact that for certain spills, a community might have taken the TAs to the impacted area within their own community boundaries while the epicentre might actually have been in a neighbouring community for which access had not yet been negotiated. Sometimes an on-the-spot decision to quickly visit and see the epicentre angered those communities, as their permission had not been sought prior to entry. The problem was usually much more complex in cases where multiple communities laid claim to a single impacted site. On the whole, the LAT delivered all sites for reconnaissance and kept community members happy by making prompt payments for their time in the field.

The reconnaissance phase signalled the beginning of the sampling phase, which was christened MARIO by the project management team, after one of the chief scientific experts on the project. The MARIO phase was packed with non-stop activity up until the end of the project. The Land Access Team participated in all the activities shown in Fig. 4. The LAT participated in a cross-section of project activities and was always in the field to deal with land access requirements or de-bushing, to ensure that the project team experts from any of the project thematic areas did not experience undue setbacks in the field. They covered all activities from drilling to socio-economics. As mentioned earlier, the CLAs worked closely with the LAT, particularly in identifying the owners. Their activities were fairly structured, following a similar pattern developed earlier in the life of the project.
As anticipated, during the initial phases there were a few instances where team members, not having fully understood the essence of the firewalls between each phase, attempted to enter community land but were prevented from gaining access, in a few isolated cases with threats from the community youth; in other cases, community names were submitted to the CLA at a later date. Members of the Land Access Team were present at over 95% of all Step 2 activities during the life of the project, occasionally being unable to attend due to conflicting field assignments. In such cases, the LAT depended heavily on feedback from the CLAs regarding the nominated representatives. This was extremely important, as the succeeding step depended on their knowledge of the community representatives. Where a sensitization activity ended without the appointment of community representatives, it became impossible to do any further work in the area. So this was a crucial step and a very traditional land access protocol for the Ogonis.

In most communities in Eleme and Khana LGAs, the community representatives were made up of three persons: the Chief Security Officer (CSO) of the community, the youth leader, and a representative of the chief's palace to give the chief feedback and progress reports. In Gokana and Tai Local Government Areas, the number was usually increased to five, as they added a representative of the landlords of impacted areas with oil infrastructure and members of the pipeline vigilante contracting teams. The process had to be flexible enough to allow for these variations as we progressed from LGA to LGA. This activity involved actual visits through community farm tracks to the exact location of impacted sites. Vehicles were used, but in a majority of cases motorbikes were used, or the full team walked long distances to reach these locations.
This process was important in order to determine, ahead of the UNEP technical team reconnaissance visit, the nature of the terrain, vehicular access challenges and any de-bushing requirements for gaining access to specific sites. The actual owners of impacted sites were identified, as this would be crucial to the success of the reconnaissance phase in terms of land entry and the recruitment of unskilled labourers. The process enabled the team to understand a little more about the local terrain, visit the owners of impacted sites and schedule visits for the project technical team to carry out an initial reconnaissance survey. This was a very challenging phase of the project, with several security issues, and success depended a great deal on the interpersonal skills of the particular LAT member. Depending on the expanse of the impacted areas, the process usually lasted 2–3 days on average. Armed with a GPS, the LAT member could give more precise feedback to the technical team regarding the actual physical location of the impacted area relative to the grid on the map, the motorable distance and alternative access routes, as well as the expectations of the community members. If an area was overgrown and would make access difficult for the technical team in Step 4, the LAT arranged for de-bushing, making all necessary payments as appropriate. During each of these visits the LAT incurred expenditure on preliminary clearing to reach the site, payment for upwards of four motorbikes that took them to the location, and a modest remuneration for the community representatives who worked with them. The deliverable from this exercise was a firm date for the reconnaissance activities undertaken by the technical assistants. It can be argued that the adaptive project management strategy used in the Environmental Assessment of Ogoniland project was responsible for its timely completion and the publication of the full report in 2011.
The step-by-step community entry protocol enabled the formation of lasting friendships between community youth and members of the land access teams, who gradually became constant figures within the communities. By participating in the sensitization meetings in Step 2 and taking responsibility for nominating community contact persons to work with the UNEP team, several communities developed a sense of ownership of the project and its process. The process is replicable in similar projects.

References

Boto, I., Peccerella, C., & Brasesco, F. (2012). Land Access and Rural Development: New Challenges, New Opportunities. Brussels.
Cussworth, J. W., & Franks, T. R. (1993). Managing Projects in Developing Countries. Essex: Addison Wesley Longman Limited.
Gardiner, P. (2005). Project Management: A Strategic Planning Approach. Palgrave.
Harrison, F., & Lock, D. (2004). Advanced Project Management: A Structured Approach. Aldershot: Gower Publishing Company.
Harrop, O. D., & Nixon, A. J. (1999). Environmental Assessment in Practice. London: Routledge.
Macionis, J. J., & Plummer, K. (2012). Sociology: A Global Introduction. Prentice Hall.
Kerzner, H. (2003). Project Management: A Systems Approach to Planning, Scheduling and Controlling. New Jersey: John Wiley and Sons.
Kerzner, H. (2003). Project Management Case Studies. New Jersey: John Wiley and Sons.
Lock, D. (2007). The Essentials of Project Management. Aldershot: Gower Publishing Limited.
Maylor, H. (2003). Project Management. Harlow: Pearson.
Meredith, J., & Mantel, S. J. (2010). Project Management: A Managerial Approach. Asia: Wiley and Sons.
Modak, P., & Biswas, A. K. (1999). Conducting Environmental Impact Studies for Developing Countries. Japan: United Nations University Press.
Newton, R. (2009). The Project Manager: Mastering the Art of Delivery. Prentice Hall.
PMI. (2010). A Guide to the Project Management Body of Knowledge. Newtown, USA: PMI.
Reiss, G. (2007). Project Management Demystified. New York, USA: Taylor and Francis.
UNEP. (2011). Environmental Assessment of Ogoniland. Nairobi: UNEP.

Acknowledgement: The support of the School of Real Estate and Planning, Henley Business School, University of Reading, United Kingdom, by way of access to library resources, is gratefully acknowledged.
If you did not previously create server objects, then enter the IP address of a Director Server. If you previously created server objects, then change the selection to Server Based, and select the server objects. Select the Director monitor, and click Select. Select your Director Service Group, and click Select. Select the certificate for this Director Load Balancing Virtual Server, and click Select. Set the Time-out to 0 minutes. This makes it a session cookie. Select Responder in the Choose Policy drop-down, and click Continue. Select the previously created Director_Redirect policy, and click Bind. On the left, expand Authentication and click Dashboard. In the Choose Server Type drop-down, select LDAP. Enter LDAP-Corp as the name. If you have multiple domains, you’ll need a separate LDAP Server per domain, so make sure you include the domain name. Change the selection to Server IP. Enter the VIP of the load balancing vServer for LDAP. Click Test Connection. NetScaler will attempt to log in to the LDAP IP. Scroll down. Or in Active Directory Users & Computers, enable Advanced view, browse to the object (don’t use Find), double-click the object, and switch to the Attribute Editor tab. Set Group Name Identifier to samAccountName. Set Group Search Attribute to memberOf. Select << New >> first. Set Group Search Sub-Attribute to CN. Select << New >> first. The status of the LDAP Server should be Up. The Authentication Dashboard doesn’t allow you to create the LDAP Policy, so you must create it elsewhere. You can create the LDAP policy now, or you can wait and create it later when you bind the LDAP Server to the NetScaler Gateway vServer. Go to NetScaler Gateway > Policies > Authentication > LDAP. Change the Server drop-down to the LDAP Server you created earlier. Give the LDAP Policy a name (one for each domain). In the Expression box, enter ns_true. You can even do a combination of policies: some with samAccountName and some with userPrincipalName.
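The LDAP Server and Policy can also be created from the CLI. This is only a sketch with placeholder values: the name LDAP-Corp matches the steps above, but the VIP 10.2.2.210, the base DN dc=corp,dc=com, and the bind account/password are assumptions you would replace with your own:

```
add authentication ldapAction LDAP-Corp -serverIP 10.2.2.210 -serverPort 636 -secType SSL -ldapBase "dc=corp,dc=com" -ldapBindDn ctxsvc@corp.com -ldapBindDnPassword Passw0rd -ldapLoginName samAccountName -groupAttrName memberOf -subAttributeName CN
add authentication ldapPolicy LDAP-Corp-pol ns_true LDAP-Corp
```

One ldapAction plus one ldapPolicy per domain, exactly as in the GUI steps.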
The samAccountName policies would be searched in priority order and the userPrincipalName policies can be used to override the search order. Bind the userPrincipalName policies higher (lower priority number) than the samAccountName policies. NetScaler 11.1 supports adding a domain name drop-down list to the logon page. Then use Cookie expressions in the auth policies and session policies. However, this probably doesn’t work for Receivers. See CTX203873 How to Add Drop-Down Menu with Domain Names on Logon Page for NetScaler Gateway 11.0 64.x and later releases for details. Another option for a domain drop-down is nFactor Authentication for Gateway. This also doesn’t work with Receiver Self-service. After authentication is complete, a Session Policy will be applied that has the StoreFront URL. The NetScaler Gateway will attempt to log into StoreFront using Single Sign-on so the user doesn’t have to login again. When logging into NetScaler Gateway, only two fields are required: username and password. However, when logging in to StoreFront, a third field is required: domain name. So how does NetScaler specify the domain name while logging in to StoreFront? AAA Group – Configure multiple session policies with unique Single Sign-on Domains. Inside the Session Policy is a field called Single Sign-on Domain for specifying the domain name. If there is only one Active Directory domain, then you can use the same Session Policy for all users. However, if there are multiple domains, then you would need multiple Session Policies, one for each Active Directory domain. But as the NetScaler loops through the LDAP policies during authentication, once a successful LDAP policy is found, you need a method of linking an LDAP policy with a Session Policy that has the corresponding SSO Domain. This is typically done using AAA groups. To use this method, see Multiple Domains – AAA Group Method. 
userPrincipalName – Alternatively, configure the LDAP policy/server to extract the user’s UPN, and then authenticate to StoreFront using UPN. This is the easiest method, but some domains don’t have userPrincipalNames configured correctly. In each of your NetScaler LDAP policies/servers, in the Other Settings section, in the SSO Name Attribute field, enter userPrincipalName (select –<< New >>– first). Make sure there are no spaces after this attribute name. NetScaler will pull this attribute from AD, and use it to Single Sign-on the user to StoreFront. On the NetScaler Gateway Virtual Server, bind LDAP authentication policies in priority order. It will search them in order until it finds a match. In your Session Policies/Profiles, in the Published Applications tab, make sure Single Sign-on Domain is not configured. Since NetScaler is using the userPrincipalName, there’s no need to specify a domain. If Single Sign-on Domain is configured then Single Sign-on authentication will fail. Another method of specifying the domain name when performing Single Sign-on to StoreFront is to use a unique session policy/profile for each domain. Use AAA Groups to distinguish one domain from another. On the right, switch to the Servers tab. Make sure all domains are in the list. Edit one of the domains. In the Default Authentication Group field, enter a new, unique group name. Each domain has a different group name. Click OK. Edit another domain and specify a new unique group name. Name the group so it exactly matches the group name you specified in the LDAP server. Click OK. On the right, in the Advanced Policies section, add the Policies section. Select Session, and click Continue. Click the plus icon to create a new policy. Give it a name that indicates the domain. You will have a separate policy for each domain. Click the plus icon to create a new profile. Give the Profile a name that indicates the domain.
You will have a separate profile for each domain. Check the Override Global box next to Single Sign-on Domain. Enter the domain name that StoreFront is expecting. Click Create. Give the policy an ns_true expression, and click Create. In the Priority field, give it a number that is lower than any other Session Policy that has Single Sign-on Domain configured. Click OK. Create another Session Policy for the next domain, and give it a name that indicates that domain. Create another profile for the next domain. On the Published Applications tab, specify the domain name of the next domain. Bind the new policy with a low Priority number. When a user logs in, NetScaler loops through LDAP policies until one of them works. NetScaler adds the user to the Default Authentication Group specified in the LDAP Server. NetScaler finds a matching AAA Group and applies the Session Policy that has SSO Domain configured. Since the policy is bound with a low priority number, it overrides any other policy that also has SSO Domain configured. An ldaps monitor can be used to verify that the Domain Controller is functional. The ldaps monitor will log in as an account, perform an LDAP query, and look for a successful response. The ldaps monitor uses a service account to log in. Make sure the service account’s password does not expire. Domain User permissions are sufficient. Since this monitor is a Perl script, it uses NSIP as the source IP. You can use RNAT to override this as described in CTX217712 How to Force scriptable monitor to use SNIP in Netscaler in 10.5. Name the monitor ldaps-Corp or similar.
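In CLI terms, the AAA Group method above looks roughly like the following sketch. The names (prof-Corp, pol-Corp, Corp-Users, LDAP-Corp) and the domain corp.com are placeholders for your own objects:

```
add vpn sessionAction prof-Corp -sso ON -ntDomain corp.com
add vpn sessionPolicy pol-Corp ns_true prof-Corp
add aaa group Corp-Users
bind aaa group Corp-Users -policy pol-Corp -priority 10
set authentication ldapAction LDAP-Corp -defaultAuthenticationGroup Corp-Users
```

One sessionAction/sessionPolicy/AAA-group trio per domain; the -ntDomain parameter is the Single Sign-on Domain field from the GUI.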
The monitor configuration has domain-specific information, so if you have multiple Active Directory domains, then you will need multiple ldaps monitors: one for each domain. Include the domain name in the monitor name. Change the Protocol to SSL_TCP. Scroll down and click OK. If you did not create server objects, then enter the IP address of a Domain Controller in this datacenter. If you previously created a server object, then change the selection to Server Based, and select the server object. In the Port field, enter 636 (LDAPS). Click the ellipsis next to a member and click Monitor Details. It should say Success – Probe succeeded. Click Close. Click Close and Done to finish creating the Service Group. Name it LDAPS-Corp-HQ-LB or similar. You will create one Virtual Server per datacenter, so include the datacenter name. Also, each domain has a separate set of Virtual Servers, so include the domain name. Enter a Virtual IP. This VIP cannot conflict with any other IP + Port already being used. You can use an existing VIP that is not already listening on TCP 636. Select the previously created Service Group and click Select. Create additional Virtual Servers for each datacenter. These additional Virtual Servers do not need a VIP, so change the IP Address Type to Non Addressable. Only the first Virtual Server will be directly accessible. After you are done creating a Virtual Server for each datacenter, click the ellipsis next to the primary datacenter’s Virtual Server and click Edit. Note: This is a Perl monitor, which uses the NSIP as the source IP. You can use RNAT to override this as described in CTX217712 How to Force scriptable monitor to use SNIP in Netscaler in 10.5. Name it StoreFront or similar. Change the Type drop-down to STOREFRONT. If you will use SSL to communicate with the StoreFront servers, then scroll down, and check the box next to Secure.
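A CLI sketch of the ldaps monitor and LDAPS load balancing steps. The server IP 10.2.2.11, VIP 10.2.2.210, base DN, bind account, and search filter are all placeholder values:

```
add lb monitor ldaps-Corp LDAP -scriptName nsldap.pl -secure YES -baseDN "dc=corp,dc=com" -bindDN ctxsvc@corp.com -password Passw0rd -filter cn=builtin
add serviceGroup svcgrp-LDAPS-Corp SSL_TCP
bind serviceGroup svcgrp-LDAPS-Corp 10.2.2.11 636
bind serviceGroup svcgrp-LDAPS-Corp -monitorName ldaps-Corp
add lb vserver LDAPS-Corp-HQ-LB SSL_TCP 10.2.2.210 636
bind lb vserver LDAPS-Corp-HQ-LB svcgrp-LDAPS-Corp
```

Repeat the monitor per domain, and the Virtual Server per datacenter, as described above.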
In the Store Name field, enter the name of your store (e.g. MyStore) without spaces. Give the Service Group a descriptive name (e.g. svcgrp-StoreFront-SSL). Change the Protocol to HTTP or SSL. If the protocol is SSL, ensure that the StoreFront Monitor has Secure checked. If you did not create server objects, then enter the IP address of a StoreFront Server. If you previously created a server object, then change the selection to Server Based and select the server objects. On the right, under Advanced Settings, click Monitors. Click where it says No Service Group to Monitor Binding. Select your StoreFront monitor and click Select. The Last Response should be Success – Probe succeeded. Click Close twice. On the right, under Advanced Settings, click Settings. On the left, in the Settings section, check the box for Client IP and enter X-Forwarded-For as the Header. Then click OK. If the Service Group is http and you don’t have certificates installed on your StoreFront servers (aka SSL Offload), then you’ll need to enable loopback in StoreFront. In StoreFront 3.5 and newer, you enable it in the GUI console. In StoreFront 3.0, run the following commands on the StoreFront 3.0 servers as detailed at Citrix Blog Post What’s New in StoreFront 3.0. Name it lbvip-StoreFront-SSL or similar. Select your StoreFront Service Group and click Select. Select the certificate for this StoreFront Load Balancing Virtual Server and click Select. On the left, in the Persistence section, select SOURCEIP. Do NOT use COOKIEINSERT persistence or Android devices will not function correctly. Set the timeout to match the timeout of Receiver for Web.
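The StoreFront load balancing steps above, sketched in CLI. The object names (mon-StoreFront, svcgrp-StoreFront-SSL, lbvip-StoreFront-SSL, MyStore) follow the examples in the text; the server IPs, VIP, and certificate name are placeholders:

```
add lb monitor mon-StoreFront STOREFRONT -storename MyStore -secure YES
add serviceGroup svcgrp-StoreFront-SSL SSL
bind serviceGroup svcgrp-StoreFront-SSL 10.2.2.21 443
bind serviceGroup svcgrp-StoreFront-SSL 10.2.2.22 443
bind serviceGroup svcgrp-StoreFront-SSL -monitorName mon-StoreFront
set serviceGroup svcgrp-StoreFront-SSL -cip ENABLED X-Forwarded-For
add lb vserver lbvip-StoreFront-SSL SSL 10.2.2.205 443 -persistenceType SOURCEIP -timeout 60
bind lb vserver lbvip-StoreFront-SSL svcgrp-StoreFront-SSL
bind ssl vserver lbvip-StoreFront-SSL -certkeyName MyCert
```

Note the SOURCEIP persistence and the X-Forwarded-For client IP header, matching the GUI settings above.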
If the NetScaler communicates with the StoreFront servers using HTTP (aka SSL Offload – 443 on client-side, 80 on server-side), and if you have enabled the Default SSL Profile, then you’ll either need to edit the Default SSL Profile to include the SSL Redirect option, or create a new custom SSL Profile with the SSL Redirect option enabled, and then bind the custom SSL Profile to this vServer. If the default SSL Profile is not enabled, then you’ll need to edit the SSL Parameters section on the vServer, and at the top right, check the box next to SSL Redirect. Otherwise the Receiver for Web page will never display. When connecting to StoreFront through load balancing, if you want to put the server name on the StoreFront webpage so you can identify the server, see Nicolas Ignoto Display server name with Citrix StoreFront 3. Users must enter https:// when navigating to the StoreFront website. To make it easier for the users, enable SSL Redirection. This procedure details the SSL Load Balancing vServer method of performing an SSL redirect. An alternative is to use the Responder method. On the right, find the SSL Virtual Server you’ve already created, click the ellipsis next to it and click Edit. In the Redirect from Port field, enter 80. In the HTTPS Redirect URL field, enter your StoreFront Load Balancing URL (e.g. https://storefront.corp.com). Scroll down and click Continue twice. This method does not add any new vServers to the list so it’s not easy to see if this is configured. Create a DNS Host record that resolves to the new VIP. The DNS name for StoreFront load balancing must be different than the DNS name for NetScaler Gateway. Unless you are following the Single FQDN procedure. Enter the new Base URL in https://storefront.corp.com format. This must match the certificate that is installed on the load balancer. Click OK. If you have multiple StoreFront clusters (separate datacenters), you might want to replicate subscriptions between them. 
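The 11.1 Redirect from Port feature described above maps to two parameters on the SSL Load Balancing vServer. A sketch, using the example vserver name from earlier and a placeholder URL:

```
set lb vserver lbvip-StoreFront-SSL -redirectFromPort 80 -httpsRedirectUrl https://storefront.corp.com
```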
StoreFront subscription replication uses TCP port 808. To provide High Availability for this service, load balance TCP port 808 on the StoreFront servers. See Configure subscription synchronization at Citrix Docs for more information. Give the Service Group a descriptive name (e.g. svcgrp-StoreFront-SubRepl). Change the selection to Server Based and select the StoreFront servers. Enter 808 as the port. Then click Create. Select the tcp monitor and click Select. Then click Bind and click Done. On the right, click the ellipsis next to the existing StoreFront Load Balancing vServer, and click Add. Name it lbvip-StoreFront-SubRepl or similar. Specify the same VIP that you used for SSL Load Balancing of StoreFront. Enter 808 as the Port. Click where it says No Load Balancing Virtual Server ServiceGroup Binding. Select your StoreFront Subscription Replication Service Group and click Select. This page contains generic SSL instructions for all SSL Virtual Servers including: Load Balancing, NetScaler Gateway, Content Switching, and AAA. Ryan Butler has a PowerShell script at Github that can automate NetScaler SSL configuration to get an A+. The last cipher is only needed for Windows XP machines. It doesn’t actually require SSL3. If you don’t need to support Windows XP, then skip that command. Or you can create the cipher group using the GUI. Go to Traffic Management > SSL > Cipher Groups. Use the up and down arrows to order the ciphers. NetScaler prefers the ciphers on top of the list, so the ciphers at the top of the list should be the most secure ciphers. To get an A+ at SSLLabs.com, you need to insert the Strict-Transport-Security HTTP header in the responses. NetScaler Rewrite Policy can do this. Go to AppExpert > Rewrite, right-click Rewrite, and click Enable Feature. Go to AppExpert > Rewrite > Actions. Name the action insert_STS_header or similar. The Type should be INSERT_HTTP_HEADER. The Header Name should be Strict-Transport-Security. 
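The custom cipher group can be created in CLI. The group name and the three cipher suites below are illustrative examples of modern NetScaler cipher names, not a definitive hardened list:

```
add ssl cipher Secure-Ciphers
bind ssl cipher Secure-Ciphers -cipherName TLS1.2-ECDHE-RSA-AES256-GCM-SHA384
bind ssl cipher Secure-Ciphers -cipherName TLS1.2-ECDHE-RSA-AES128-GCM-SHA256
bind ssl cipher Secure-Ciphers -cipherName TLS1.2-ECDHE-RSA-AES-256-SHA384
```

Bind order matters: NetScaler prefers ciphers bound earlier (top of the list).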
On the left, go to AppExpert > Rewrite > Policies. Name it insert_STS_header or similar. In the Expression box, enter HTTP.REQ.IS_VALID. Now you can bind this Rewrite Response policy to HTTP-based SSL vServers. When editing an SSL vServer, if the Policies section doesn’t exist on the left, then add it from the Advanced Settings column on the right. In the Policies section on the left, click the plus icon. Select Rewrite > Response and click Continue. Then select the STS Rewrite Policy and click Bind. You can use SSL Profiles to package several SSL settings together and apply the settings package (Profile) to SSL vServers and SSL Services. These settings include: disable SSLv3, bind ciphers, bind ECC curves, etc. There are default SSL Profiles, and there are custom SSL Profiles. The default SSL Profiles are disabled by default. Once the default SSL Profiles are enabled, the default settings apply to all SSL vServers and all SSL Services, unless you bind a custom SSL Profile. Also, once default is enabled, it’s not possible to disable it. Some features of custom SSL Profiles require default SSL Profiles to be enabled. For example, you cannot configure ciphers in a custom SSL Profile unless the default SSL Profiles are enabled. If you enable the default SSL Profiles, then it’s not possible to configure SNI for backend (services and service groups). Default SSL Profiles are intended to provide a baseline SSL configuration for all newly created SSL Virtual Servers and SSL Services. You can still create Custom SSL Profiles to override the Default SSL Profiles. Make sure you are connected to the appliance using http and not https. Click the ellipsis next to the frontend or backend default profile and click Edit. Frontend = client-side connections to SSL vServers. Backend = server-side connections (SSL Services and Service Groups). Or you can create a new custom SSL profile. Scroll down to the SSL Ciphers section and click the pencil icon. Click Remove All and click OK.
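The STS Rewrite action, policy, and response binding in CLI. The max-age value and the example vserver name are placeholders you can adjust:

```
enable ns feature REWRITE
add rewrite action insert_STS_header insert_http_header Strict-Transport-Security "\"max-age=157680000\""
add rewrite policy insert_STS_header HTTP.REQ.IS_VALID insert_STS_header
bind lb vserver lbvip-StoreFront-SSL -policyName insert_STS_header -priority 100 -gotoPriorityExpression END -type RESPONSE
```

Bind the same policy (as a Response policy) to every HTTP-based SSL vServer.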
You must click OK before binding the custom cipher group. Click the pencil icon again. Scroll down and select your custom cipher group. Then click the arrow to move it to the right. Then click OK. Click OK when you see the No usable ciphers message. Then click Done to close the SSL Profile. If you edit one of your SSL Virtual Servers (e.g. Load Balancing vServer), there’s an SSL Profile section indicating that the default profile is being used. You can change the binding to a different SSL Profile. SSL Profiles do not include forcing Strict Transport Security. You’ll still need to create the STS Rewrite Policy and bind it to every SSL vServer as detailed in the next section. Whether you use SSL Profiles or not, you need to bind certificates and STS Rewrite Policy to every SSL vServer. If you enabled the Default SSL Profiles feature, you can either leave it set to the Default SSL Profile; or you can change it to a Custom SSL Profile. Or you can bind an SSL Profile without enabling the Default SSL Profiles. If you don’t use the SSL Profiles feature, then you’ll need to manually configure ciphers and SSL settings on every SSL vServer. When creating an SSL Virtual Server (e.g. SSL Load Balancing vServer), on the left, in the Certificates section, click where it says No Server Certificate. If you want to bind a custom SSL Profile, if Default SSL Profile is enabled, in the SSL Profile section on the left, click the pencil icon. Select your custom SSL Profile and click OK. If you didn’t bind an SSL Profile, on the left, in the SSL Parameters section, click the pencil icon. If you didn’t bind an SSL Profile, scroll down to the SSL Ciphers section and click the pencil icon. Click OK when you see the No usable ciphers message. SSL Virtual Servers created on newer versions of NetScaler will automatically have ECC Curves bound to them. However, if this appliance was upgraded from an older version then the ECC Curves might not be bound. 
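A CLI sketch of the default/custom SSL Profile workflow above. Here custom-frontend and Secure-Ciphers are placeholder names for your own custom profile and custom cipher group, and lbvip-StoreFront-SSL is the example vserver from earlier:

```
set ssl parameter -defaultProfile ENABLED
add ssl profile custom-frontend -sslProfileType FrontEnd -ssl3 DISABLED
bind ssl profile custom-frontend -cipherName Secure-Ciphers -cipherPriority 1
set ssl vserver lbvip-StoreFront-SSL -sslProfile custom-frontend
```

Remember that enabling the default profile is one-way: it cannot be disabled afterwards.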
If you are not using SSL Profile, then on the right, in the Advanced Settings section, click ECC Curve. If the Policies section doesn’t exist on the left, then add it from the Advanced Settings column on the right. Select the STS Rewrite Policy and click Bind. New in NetScaler 11.1, you can configure SSL Redirect directly in an SSL Load Balancing vServer (port 443) instead of creating a separate HTTP (port 80) Load Balancing vServer. This is only an option for SSL Load Balancing vServers; it’s not configurable in Gateway vServers or Content Switching vServers. Only one Redirect URL can be specified. Alternatively, the Responder method can handle multiple FQDNs to one VIP (e.g. wildcard certificate) and/or IP address URLs. Edit the SSL Load Balancing vServer (port 443). In the HTTPS Redirect URL field, enter https://MyFQDN. Click Continue twice. The Down Virtual Server Method is easy, but the Redirect Virtual Server must be down in order for the redirect to take effect. Another option is to use Responder policies to perform the redirect. On the right, find an SSL Virtual Server you’ve already created, click the ellipsis next to it, and click Add. Doing it this way copies some of the data from the already created Virtual Server. Or if you are redirecting NetScaler Gateway, create a new Load Balancing vServer with the same VIP as the Gateway. The IP Address should already be filled in. It must match the original SSL Virtual Server (or Gateway vServer). Click OK. In the Redirect URL field, enter the full URL including https://. For example: https://storefront.corp.com/Citrix/StoreWeb. Click OK. When you view the SSL redirect Virtual Server in the list, it will have a state of DOWN. That’s OK. The Port 80 Virtual Server must be DOWN for this redirect method to work.
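The Down Virtual Server method above is a single CLI line. The VIP must match the SSL vserver (or Gateway vserver) being redirected; the name, IP, and URL here are placeholders. Because no service is bound, the vserver stays DOWN, which is exactly what this method requires:

```
add lb vserver storefront-http-redirect HTTP 10.2.2.205 80 -redirectURL "https://storefront.corp.com/Citrix/StoreWeb"
```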
This method requires the Redirect Virtual Server to be UP. Create a dummy Load Balancing service. This dummy service can be bound to multiple Redirect Virtual Servers. Go to Traffic Management > Load Balancing > Services. Use a loopback IP address (e.g. 127.0.0.1). After the service is created, it changes to a NetScaler-owned IP. Enter an expression. The following expression can be used by multiple Redirect Virtual Servers since it redirects to https on the same URL the user entered in the browser. Or you can create a Responder Action with a more specific Target. Click Create. Bind the AlwaysUp service and click Bind. Then click Continue. Select the http_to_https Redirect Responder policy and click Bind. Then click Done. GSLB is nothing more than DNS. GSLB is not in the data path. GSLB receives a DNS query, and GSLB sends back an IP address, which is exactly how a DNS server works. The user then connects to the returned IP, which doesn’t even need to be on a NetScaler. GSLB is only useful if you have a single DNS name that could resolve to two or more IP addresses. If there’s only one IP address, then use normal DNS instead. When configuring GSLB, don’t forget to ask “where is the data?”. For XenApp/XenDesktop, DFS multi-master replication of user profiles is not supported, so configure “home” sites for users. More information at Citrix Blog Post XenDesktop, GSLB & DR – Everything you think you know is probably wrong! GSLB Configuration can be split between one-time steps for GSLB infrastructure, and repeatable steps for each GSLB-enabled DNS name. Create ADNS listener on each NetScaler pair – DNS clients send DNS queries to the ADNS listeners. GSLB resolves a DNS query into an IP address, and returns the IP address in the DNS response. Create GSLB Sites (aka MEP Listener) – GSLB Sites usually correspond to different datacenters. 
GSLB Sites are also the IP address endpoints for NetScaler’s proprietary Metric Exchange Protocol (MEP), which is used by GSLB to transmit proximity, persistence, and monitoring information. Import Static Proximity Database – NetScaler includes a database that can be used to determine the geographical location of an IP address. Or you can subscribe to a geolocation service, and import its database. Delegate DNS sub-zone to NetScaler ADNS – in the original DNS zone, create a new sub-zone (e.g. gslb.company.com), and delegate the sub-zone to all ADNS listeners. Create one or more GSLB Services per DNS name, and per IP address response – each GSLB Service corresponds to a single IP address that can be returned in response to a DNS Query. Optionally, bind a Monitor to each GSLB Service. Monitors determine if the GSLB Service is up or not. Bind a DNS name to the GSLB Virtual Server. For active/active – bind multiple GSLB Services to the GSLB Virtual Server, configure a load balancing method (e.g. proximity), and configure site persistence. For active/passive – bind the active GSLB Service. Create another GSLB Virtual Server with passive GSLB Service, and configure as Backup Virtual Server. Create CNAME records for each delegated DNS name – in the main DNS zone, create a CNAME that maps the original DNS name to the delegated sub-zone. For example, CNAME citrix.company.com to citrix.gslb.company.com. You will create separate GSLB Services, separate GSLB Virtual Servers, and separate CNAMEs for each DNS name. If you have a bunch of DNS names that you want to GSLB-enable, then you’ll repeat these steps for each GSLB-enabled DNS name. Each datacenter has a separate ADNS listener IP address. DNS is delegated to all GSLB ADNS Listener IPs, and any one of them can respond to the DNS query. Thus, all NetScaler pairs participating in GSLB should have the same Per-DNS name configuration. One NetScaler appliance for both public DNS/GSLB and internal DNS/GSLB? 
GSLB can be enabled both publicly and internally. For public GSLB, configure it on DMZ NetScaler appliances, and expose the DNS listener to the Internet. For internal GSLB, configure it on separate internal NetScaler appliances/instances, and create an internal DNS listener. Each NetScaler appliance only has one DNS table, so if you try to use the same NetScaler for both public DNS and internal DNS, then be aware that external users can query for internal GSLB-enabled DNS names. As described by Phil Bossman in the comments, you can use a Responder policy to prevent external users from reading internal DNS names. Let’s say you have a single DNS name citrix.company.com. When somebody external resolves the name, it should resolve to a public IP. When somebody internal resolves the name, it should resolve to an internal IP. For internal GSLB and external GSLB of the same DNS name on the same NetScaler appliance, you can use DNS Policies and DNS Views to return different IP addresses depending on where users are connecting from. See Citrix CTX130163 How to Configure a GSLB Setup for Internal and External Users Using the Same Host Name. If the Internet circuit in the remote datacenter goes down, then this should affect public DNS, since you don’t want to give out a public IP that isn’t reachable. But do you also want an Internet outage to affect internal DNS? Probably not. In that case, you would need different GSLB monitoring configurations for internal DNS and external DNS. However, if you have only a single GSLB Virtual Server with DNS Views, then you can’t configure different monitoring configurations for each DNS View. Route GSLB Metric Exchange Protocol (MEP) across the Internet. If MEP goes down, then all IP addresses associated with the remote GSLB Site are assumed to be down, and thus the local NetScaler will stop giving out those remote IP addresses. Bind explicit monitors to each GSLB Service, and ensure the monitoring is routed across the Internet.
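The DNS View approach from CTX130163 can be sketched in CLI. This is a hedged example: the view/policy names, the internal subnet 10.0.0.0/8, the GSLB service name gslb-svc-citrix-HQ, and the internal IP are all hypothetical values you would replace:

```
add dns view internal-view
add dns policy internal-clients "CLIENT.IP.SRC.IN_SUBNET(10.0.0.0/8)" -viewName internal-view
bind dns global internal-clients 100
bind gslb service gslb-svc-citrix-HQ -viewName internal-view 10.2.2.205
```

Internal clients matching the policy then receive the view-bound internal IP, while everyone else gets the GSLB service's public IP.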
GSLB is separate from data traffic. The GSLB IP addresses are separate from the IP addresses needed for data. ADNS Listener IP: A NetScaler IP that listens for DNS queries. The ADNS listener IP is typically an existing SNIP on the appliance. For external DNS, create a public IP for the ADNS Listener IP, and open UDP 53, so Internet-based DNS servers can access it. A single NetScaler appliance can have multiple ADNS listeners – typically one ADNS listener for public, and another ADNS listener for internal. GSLB Site IP / MEP listener IP: A NetScaler IP that will be used for NetScaler-to-NetScaler GSLB communication. This communication is called MEP or Metric Exchange Protocol. MEP transmits the following between GSLB-enabled NetScaler pairs: load balancing metrics, proximity, persistence, and monitoring. GSLB Sites – On NetScaler, you create GSLB Sites. GSLB Sites are the endpoints for the MEP communication. Each NetScaler pair is configured with the MEP endpoints for the local appliance pair, and all remote appliance pairs. TCP Ports – MEP uses port TCP 3009 or TCP 3011 between the NetScaler pairs. TCP 3009 is encrypted. The ADNS IP address can be used as the MEP endpoint IP. MEP endpoint can be any IP – The MEP endpoint IP address can be any IP address and does not need to be a SNIP or ADNS. One MEP IP per appliance – there can only be one MEP endpoint IP address on each NetScaler pair. Route MEP across Internet? – If you route MEP across the Internet, and if the MEP connection is interrupted, then Internet at one of the locations is probably not working. This is an easy way to determine if remote Internet is up or not. If you don’t route MEP across the Internet, then you’ll need to configure every remote-site GSLB Service with a monitor to ensure that the remote Internet is up. Public IPs for MEP Endpoints – if you route MEP across the Internet, then you’ll need public IPs for each publicly accessible MEP endpoint IP address.
Public Port for MEP: Open port TCP 3009 between the MEP Public IPs. Make sure only the MEP IPs can access this port on the other NetScaler. Do not allow any other device on the Internet to access this port. Port 3009 is encrypted. GSLB Sync Ports: To use GSLB Configuration Sync, open ports TCP 22 and TCP 3008 (secure) from the NSIP (management IP) to the remote public MEP IP. The GSLB Sync command runs a script in BSD shell and thus NSIP is always the Source IP. Public IP Summary: In summary, for public GSLB, if MEP and ADNS are listening on the same IP, then you need one new public IP that is NAT’d to the DMZ IP that is used for ADNS and MEP (GSLB Site IP). Each datacenter has a separate public IP. DNS is delegated to all public ADNS IP listeners. At System > Network > IPs, identify a NetScaler-owned IP that you will use as the ADNS listener. This is typically a SNIP. Create a public IP for the ADNS Service IP, and configure firewall rules. On the left, expand Traffic Management > Load Balancing, and click Services. In the Protocol drop-down field, select ADNS. Scroll down and click Done, to close the Load Balancing Service properties. On the left of the console, expand System, expand Network, and then click IPs. On the right, you’ll see the SNIP is now marked as the ADNS svc IP. Repeat ADNS configuration on the other appliance pair in the other datacenter. Your NetScaler appliances are now DNS servers. NetScaler 11.1 build 51 and newer includes DNS Security Options at Security > DNS Security, which can protect your ADNS service. To protect ADNS, set the Profile to All DNS Endpoints. This section details MEP configuration between two GSLB Sites. See Citrix Docs for larger Parent-Child Topology Deployment using the MEP Protocol, including new features in NetScaler 11.1 build 51 and newer. The local GSLB Site IP can be any IP. Or you can use the same SNIP, and same public IP, used for ADNS. 
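The GUI steps above boil down to a one-line CLI command per listener (the service names and SNIP below are hypothetical); an ADNS_TCP service can be added alongside the UDP listener if you also want to answer DNS over TCP:

```
add service ADNS_Public 192.168.10.5 ADNS 53
add service ADNS_TCP_Public 192.168.10.5 ADNS_TCP 53
```

Run the same commands, with that datacenter’s own SNIP, on the appliance pair in the other datacenter.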
On the left, expand Traffic Management, right-click GSLB, and enable the feature. Expand GSLB, and click Sites. We’re adding the local site first. Enter a descriptive name for the local site. In the Site Type drop-down, select LOCAL. In the Site IP Address field, enter the IP on which this appliance will listen for MEP traffic. This is typically a DMZ SNIP. For Internet-routed GSLB MEP, in the Public IP Address field, enter the public IP that is NAT’d to the GSLB Site IP. For internal GSLB MEP, there is no need to enter anything in the Public IP field. Scroll down, and click Create, to close the Create GSLB Site page. Go back to System > Network > IPs, and verify that the IP is now marked as a GSLB site IP. If you want to use the GSLB Sync Config feature, then you’ll need to edit the GSLB site IP, and enable Management Access. Scroll down, and enable Management Access. SSH is all you need. Go to the other appliance pair, and also create the Local GSLB site using its GSLB site IP, and its public IP that is NAT’d to the GSLB site IP. Enter a descriptive name for the remote site. Select REMOTE as the Site Type. In the Public IP Address field, enter the public IP that is NAT’d to the GSLB Site IP on the other appliance. For MEP, TCP 3009 must be open from the local GSLB Site IP, to the remote public Site IP. For GSLB sync, TCP 22, and TCP 3008, must be open from the local NSIP, to the remote public Site IP. On the left, expand System, expand Network, and click RPC. On the right, right-click the new RPC address (the other site’s GSLB Site IP), and click Edit. If your local GSLB Site IP is not a SNIP, then you’ll need to change the RPC Node to use the local GSLB Site IP as the source IP. In the Source IP Address field, enter the local GSLB Site IP. You can change additional GSLB settings at Traffic Management > GSLB: on the right, in the left column, click Change GSLB settings.
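As a CLI sketch of the above (site names, Site IPs, and public IPs are all hypothetical): create the LOCAL site on each pair, a REMOTE site pointing at the other pair, and, if the local Site IP is not a SNIP, source MEP from it explicitly via the RPC node:

```
add gslb site DC1 LOCAL 192.168.10.5 -publicIP 203.0.113.10
add gslb site DC2 REMOTE 192.168.20.5 -publicIP 198.51.100.10
set ns rpcNode 192.168.20.5 -srcIP 192.168.10.5
```

On the other appliance pair, the LOCAL/REMOTE roles are reversed but the names and IPs stay the same.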
In the GSLB Service State Delay Time (secs) field, enter a delay before the GSLB Services are marked as down when MEP goes down. In the NetScaler GUI, on the left, expand Traffic Management, expand GSLB, expand Location, and click Static Databases. Browse to /var/netscaler/inbuilt_db/, and open Citrix_NetScaler_InBuilt_GeoIP_DB.csv. To browse to the directory, select var, and then click Open. Repeat for each directory until you reach /var/netscaler/inbuilt_db. In the Location Format field, if using the built-in database, select netscaler, and click Create. On the left, expand Traffic Management, expand GSLB, expand Location, and click Custom Entries. Enter a range of IP addresses for a particular location. Enter a Location Name in Geo Location format, which is typically six location words separated by periods. You can look in the static proximity database for examples. Continue creating Custom Entries for other private IP blocks. GSLB Services represent the IP addresses that are returned in DNS Responses. The IP addresses represented by GSLB Services do not need to be on a NetScaler, but NetScaler-owned IP addresses (e.g. load balancing VIPs) have additional GSLB Site Persistence options (e.g. cookie-based persistence). Each potential IP address in a DNS response is a separate GSLB Service. GSLB Services are associated with GSLB Sites. GSLB Service configuration is identical for active/active and active/passive. GSLB Virtual Servers define active/active or active/passive, not GSLB Services. GSLB should be configured identically on all NetScaler pairs that are responding to DNS queries. Since you have no control over which NetScaler will receive the DNS query, you must ensure that both NetScaler pairs are giving out the same DNS responses. On the left, expand Traffic Management > GSLB, and click Services. Select one of the GSLB Sites. The IP address you’re configuring in this GSLB Service should be geographically located in the selected GSLB Site.
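For static proximity, the location database and custom entries can also be loaded from the CLI. A sketch (the custom IP range and the six-part location qualifiers are hypothetical; copy the qualifier style from entries in the static proximity database):

```
add locationFile /var/netscaler/inbuilt_db/Citrix_NetScaler_InBuilt_GeoIP_DB.csv -format netscaler
add location 10.10.0.0 10.10.255.255 *.US.TX.Dallas.*.*
```

Repeat the add location command for each private IP block, since the public GeoIP database cannot locate RFC 1918 addresses.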
On the bottom part, if the IP address is owned by this NetScaler (Local Site), then select Virtual Servers, and select a Virtual Server that is already defined on this appliance. It should automatically fill in the other fields. If you see a message asking if you wish to create a service object, click Yes. This option is only available when creating a GSLB Service in the Local GSLB Site. If the IP address is not owned by this NetScaler, then change the selection to New Server, and enter the remote IP address in the Server IP field. The Server IP field is the IP address that NetScaler will monitor for reachability. If the remote IP is owned by a different NetScaler that is reachable by MEP, then enter the actual VIP configured on that remote NetScaler. The Server IP does not need to match what is returned to the DNS Query. In the Public IP field, enter the IP address that will be returned to the DNS Query. If you leave Public IP blank, then NetScaler will copy the Server IP to the Public IP field. For Public GSLB, the Public IP field is usually a Public IP address. For internal GSLB, the Public IP field is usually an internal IP, and probably matches the Server IP. GSLB Service Monitoring – on the right, in the Advanced Settings column, you can click Monitors to bind a monitor to this GSLB Service. Review the following notes before you bind a monitor. Local NetScaler VIP – If the GSLB Service IP is a VIP on the local appliance, then GSLB will simply use the state of the local traffic Virtual Server (Load Balancing, Content Switching, or Gateway). Remote NetScaler VIP – If the GSLB Service IP is a VIP on a remote appliance, then GSLB will use MEP to ask the other appliance for the state of the remote traffic Virtual Server. In both cases, there’s no need to bind a monitor to the GSLB Service.
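A CLI sketch of a matching pair of GSLB Services (service names, service type, and IPs are hypothetical): the local Service points at a VIP on this appliance, the remote Service’s Server IP is the actual VIP on the remote NetScaler, and the Public IP is what gets handed out in DNS responses:

```
add gslb service gslb_svc_dc1 192.168.10.100 SSL 443 -siteName DC1 -publicIP 203.0.113.20
add gslb service gslb_svc_dc2 192.168.20.100 SSL 443 -siteName DC2 -publicIP 198.51.100.20
```

For internal GSLB, -publicIP would simply repeat the internal Server IP (or be omitted, since NetScaler copies the Server IP to the Public IP).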
GSLB Monitor overrides other Monitoring methods – If you bind a monitor to the GSLB Service, then MEP and local Virtual Server state are ignored (overridden). IP is not on a NetScaler – If the GSLB Service IP is not hosted on a NetScaler, then only GSLB Service monitors can determine if the Service IP is up or not. Monitor remote Internet – For Public DNS, if MEP is not routed through the Internet, then you need some method of determining if the remote Internet circuit is up or not. In that case, you’ll need to bind monitors directly to the GSLB Service. The route of the Monitor should go across the Internet. Or you can ping the Internet router in the remote datacenter to make sure it’s reachable. Traffic Domains – If the GSLB Service IP is in a non-default Traffic Domain, then you will need to attach a monitor, since GSLB cannot determine the state of Virtual Servers in non-default Traffic Domains. Active/Active Site Persistence – If you intend to do GSLB active/active, and if you need site persistence, then you can configure your GSLB Services to use Connection Proxy or HTTP Redirect. See Citrix Blog Post Troubleshooting GSLB Persistence with Fiddler for more details. This only works with GSLB Service IPs that match Virtual Server VIPs on NetScaler appliances reachable through MEP. Scroll down, and click Done, to finish creating the GSLB Service. Create additional GSLB Services for each IP address that will be returned to a DNS query. On the left, expand Traffic Management, and click GSLB. On the right, click View GSLB Configuration. This shows you all of the CLI commands for GSLB. Look for add gslb service commands. You can copy them, and run them (SSH) on other NetScaler pairs that are participating in GSLB. The GSLB Virtual Server is the entity that the DNS name is bound to. GSLB Virtual Server then gives out the IP address of one of the GSLB Services that is bound to it. Create a GSLB Virtual Server for the Passive IP address. 
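If you do need an explicit monitor, for example to verify the remote Internet circuit when MEP stays on an internal path, a sketch might look like this (monitor name and remote router IP are hypothetical); remember that binding any monitor overrides MEP-based state for that Service:

```
add lb monitor mon_dc2_inet PING -destIP 198.51.100.1
bind gslb service gslb_svc_dc2 -monitorName mon_dc2_inet
```

The -destIP override makes the monitor probe the remote Internet router instead of the Service IP itself, so the GSLB Service goes down when the remote circuit is unreachable.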
Bind the Passive GSLB Service to the Passive GSLB Virtual Server. Create another GSLB Virtual Server for the Active IP address. Bind the Active GSLB Service to the Active GSLB Virtual Server. Configure Backup Virtual Server pointing to the Passive GSLB Virtual Server. Bind a DNS name to the Active GSLB Virtual Server. Repeat the GSLB Virtual Server configuration on other NetScaler pairs participating in GSLB. Delegate the DNS name to NetScaler ADNS. Create one GSLB Virtual Server. Bind two or more GSLB Services to the Virtual Server. Source IP persistence is configured on the GSLB Virtual Server. Cookie Persistence is configured on the GSLB Services. If you configure GSLB to use the Static Proximity load balancing method, a DNS extension called ECS (EDNS Client Subnet) can include the actual DNS client’s subnet in the query. This dramatically improves the accuracy of determining a user’s location. Without it, GSLB can only see the IP address of the user’s configured DNS server instead of the real client IP. In the ADNS Service section, click OK. If you are configuring active/passive using the backup GSLB Virtual Server method, create a second GSLB Virtual Server that has the passive GSLB service bound to it. Don’t bind a Domain to the second GSLB Virtual Server. Then edit the Active GSLB Virtual Server and use the Backup Virtual Server section to select the second GSLB Virtual Server. On the left, if you expand Traffic Management > DNS, expand Records, and click Address Records, you’ll see a new DNS record for the GSLB domain you just configured. Notice it is marked as GSLB DOMAIN. Configure identical GSLB Virtual Servers on the other NetScaler appliance. Both NetScalers must be configured identically. You can also synchronize the GSLB configuration with the remote appliance as detailed in the next section. To manually sync the GSLB configuration from one GSLB Site to another, go to Traffic Management > GSLB. On the right, in the right column, click Synchronize configuration on remote sites.
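The active/passive arrangement described above can be sketched in CLI as follows (the vserver names are hypothetical; the domain and service names follow the earlier examples). Only the active vserver gets the domain; the passive vserver is reached through the Backup Virtual Server setting:

```
add gslb vserver gslb_vs_active SSL
add gslb vserver gslb_vs_passive SSL
bind gslb vserver gslb_vs_active -serviceName gslb_svc_dc1
bind gslb vserver gslb_vs_passive -serviceName gslb_svc_dc2
set gslb vserver gslb_vs_active -backupVServer gslb_vs_passive
bind gslb vserver gslb_vs_active -domainName citrix.company.com -TTL 5
```

For active/active, you would instead bind both GSLB Services to one GSLB Virtual Server and skip the backup vserver.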
NetScaler 11.1 build 51 and newer has an automatic GSLB Configuration Sync feature, which automatically syncs the GSLB config every 15 seconds. To enable it on the master appliance, go to Traffic Management > GSLB. On the right, in the left column, click Change GSLB settings. Check the box next to Automatic Config Sync. Only enable this on the one appliance where you are configuring GSLB and want that GSLB config synced to the other appliances. The automatic sync log can be found at /var/netscaler/gslb/periodic_sync.log. When syncing GSLB Services, it tries to create LB Server objects on the remote appliance. If the GSLB Service IP matches an existing LB Server object, then the GSLB sync will fail. Check the Sync logs for details. You’ll have to delete the conflicting LB Server object before GSLB Sync works correctly. In NetScaler 11.1 build 51 and newer, you can test GSLB DNS name resolution from the GUI by going to Traffic Management > GSLB, and on the right, in the left column, click Test GSLB. Select a GSLB Domain Name. Select an ADNS Service IP, and click Test. The test performs a dig against the ADNS IP. Verify that the response contains the IP address you expected. Another method of testing GSLB is to simply point nslookup to the ADNS services, and submit a DNS query for one of the DNS names bound to a GSLB vServer. Run the query multiple times to make sure you’re getting the response you expect. The NetScaler ADNS services at both GSLB sites should be giving the same response. If using the GeoLite Country database instead of the built-in database, select geoip-country in the Location Format field, and click Create.
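From any workstation, the nslookup test above can be run against each ADNS listener directly (the ADNS IPs below are hypothetical; the domain follows the earlier example). Both sites should return the same answer:

```
dig @203.0.113.10 citrix.company.com +short
dig @198.51.100.10 citrix.company.com +short
nslookup citrix.company.com 203.0.113.10
```

Repeating the queries several times also lets you observe the load balancing method in action (e.g. round robin alternating between Public IPs in an active/active setup).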
2019-04-25T16:24:38Z
https://www.carlstalhood.com/category/netscaler/netscaler-11-1/load-balancing-netscaler-11-1/
Tongues will cease, but this has not yet happened (1 Cor 13:8)
Along with other gifts, tongues will eventually cease. Some people claim that this took place when the last of the twelve apostles died. Such people claim that the "perfect" (1 Cor 13:10) refers to the "completed canon of scripture", and that this represents the perfection of our knowledge. Such an interpretation (see also the appendix) cannot be sustained. Firstly, scripture itself makes no reference to such a definitive "completion of the canon of scripture" (note that Rev 22:18-19 can legitimately be applied only to the book of Revelation). Secondly, 1 Cor 13:12 shows how 1 Cor 13:9-10 are to be correctly interpreted - "when I know as I am known". This is certainly not true yet: it is obvious that when every Christian has this sort of knowledge, being told about it by others in prophecies will be redundant, for he will need to know nothing more. Truly the perfect knowledge will have come, and there will be no further need for prophecy. Thirdly, such a view fails to see that the gift of tongues has nothing to do with the definition of knowledge about God. Even if such people were right in saying that prophecies and knowledge had ended with the completion of the New Testament, this could not affect tongues. Tongues are prayer to God (1 Cor 14:2), and are not for the purpose of providing men with propositional revelation about God. Clearly the time when tongues will pass away is when we shall see God face to face (1 Cor 13:12: who can claim this is true now?). With such perfected face-to-face knowledge we shall understand God Himself well enough not to have to express our prayers to God in the form of mysteries (1 Cor 14:2). Seeing God face to face, the limitations of language imposed by the confusion at Babel (Gen 11) will have come to an end, and communication with God will have been perfected.
This clearly remains in the future, since we do not yet know how to pray as we ought (Romans 8:26), but still see in a mirror dimly (1 Cor 13:12). Faith, hope and love will remain features for all eternity (1 Cor 13:13), because these provide the basis for our continuing relationship with God.
Instruction in handling spiritual gifts in the church
Having laid out the principle that the gifts are for the purpose of building the church, rather as scaffolding is used during the construction of a building, Paul then goes on to show how those gifts can be used properly and in safety. His instructions constitute a statement of good building practice; the requirements are, however, mandatory (1 Cor 14:37-38). This is similar to the statutorily backed advice issued by the Health and Safety Executive in Britain for the conduct of building operations.
Genuineness of tongues never questioned
Throughout 1 Cor 14, the genuineness of non-understandable tongues is taken for granted, and Paul never suggests that the tongues being used at Corinth might not be genuine. Paul never even mentions the possibility of counterfeits! However, the code of practice laid down by Paul for the use of tongues is fully sufficient to ensure that any tongues which have an improper origin will be eliminated. Paul's approach should continue to be used in the church today: Paul specifically prohibits the prohibition of speaking in tongues (1 Cor 14:39). Many churches disobey scripture by prohibiting any use of tongues in their meetings: their purported reason, that the tongues are not genuine, cannot be justified from scripture.
Normal church tongues are not understandable
1 Cor 14:2 acknowledges that a person speaking in tongues is speaking to God, and that he speaks mysteries that no-one understands. 1 Cor 14:16 amplifies this statement, showing also that the content is thanksgiving. The tongues used in the church differ from those at Pentecost.
The tongues at Pentecost glorified God in an understandable way, were languages which could be understood directly by men, and were spoken for men's benefit. By contrast, the speakers of tongues in the church are not directly understood by anyone present, and they utter mysteries as prayer and thanksgiving towards God.
Tongues is prayer, and spiritually strengthens the tongue speaker
Paul expresses his preference for prophecy over tongues because prophecy has the power to edify more people (1 Cor 14:1-5). 1 Cor 14:3-4 shows that the purpose of prophecy is to edify the church, while the purpose of tongues is to edify (= build up, strengthen) the speaker himself. Clearly, unlike those at Pentecost, these tongues have nothing whatever to do with communication with men. Rather, the purpose of tongues, being prayer (1 Cor 14:2), is to strengthen the spirit of the believer, and so equip him for spiritual work. Just as is the case when praying in normal language, it is vitally important that the praying should be inspired by the Holy Spirit (Eph 6:18; Jude 1:20), and not be mechanical or the heaping up of empty phrases (Matt 6:7). Ideally, in a church context, the use of the tongue should ultimately lead to revelation which can be made known to all (1 Cor 14:13-16).
Prophecy better than tongues for building the church, unless interpreted (1 Cor 14:5-12)
As far as the church is concerned, prophecy is more use for edifying unless the tongue is interpreted (1 Cor 14:5), when it then has equivalent edification value to a prophecy. It should be noted that, given that tongues are prayers (1 Cor 14:2) or thanksgivings (1 Cor 14:16), it would normally be expected that the interpretation would be in the same form, not a prophecy directed towards men. There are many sorts of sounds (1 Cor 14:10: a different word from the "languages" used elsewhere), and none is without meaning, but unless the language (1 Cor 14:11) is understood by the hearer it is of no help to him.
Hence, Paul says, if believers are zealous for spiritual gifts, the exercise of prophecy is preferable as it builds up the church (1 Cor 14:12).
Tongue speakers should seek the gift of interpreting their own tongues
Because the edification value is increased by interpretation, 1 Cor 14:13 commands tongue speakers to pray for the power to interpret. The Greek implies that he should pray to be able to interpret it himself. Thus an obligation is being placed upon the tongue speaker to seek to be able to interpret his own utterances. A tongue speaker's spirit prays but his mind remains fallow while he speaks in tongues (1 Cor 14:14). Paul clearly does not regard it as desirable that this unfruitfulness of mind in the tongue speaker should continue (1 Cor 14:15). The reason for a tongue speaker to seek self-interpretation is that the tongue speaker's own mind should not remain fallow. Then the speaker can follow his tongue with its interpretation and so edify the other hearers, who otherwise cannot join in their Amens to back up the tongue speaker's prayer (1 Cor 14:16-17).
Paul's own extensive use of tongues, privately (1 Cor 14:18-19)
Paul says that he speaks more in tongues than anybody at Corinth - but much prefers to use understood language in church (1 Cor 14:18-19). This is not, as sometimes claimed, a reference to his natural linguistic ability being used in evangelism. The immediately preceding context, especially his use of the personal pronoun I in 1 Cor 14:14-15, shows that he is writing about his own use of unknown languages, which he used with the associated interpretation, in the manner he commended to the Corinthians. In 1 Cor 14:19 Paul is quite specific that the ten thousand words he might speak in a tongue would not be understood by anybody present. If Paul used tongues so extensively, where did he use this gift?
If it is not in church (1 Cor 14:19), and we have proved that they cannot have been used evangelistically, it must be that Paul used this gift in his private praying. This is probably what empowered Paul's extensive prayer life and intercessory ministry, to which he repeatedly makes reference in his correspondence (Romans 1:9; Romans 10:1; 2 Cor 5:10; 2 Cor 13:7; Eph 1:6; Phil 1:4,9; Col 1:3,9; 1 Thess 1:2; 1 Thess 3:10; 1 Thess 5:23; 2 Thess 1:11; 2 Tim 1:3; 2 Tim 4:16; Philemon 1:4). Paul has made it clear that tongues strengthens the one who prays (1 Cor 14:2), and that through praying in tongues he receives the understanding of how to pray with his mind (1 Cor 14:13-15). The Corinthians were not mature in their thinking about using spiritual gifts (1 Cor 14:20), as the gifts were not being used in a way that would build the church, but rather destroy it. As strange tongues are a sign to unbelievers of their rejection by God (1 Cor 14:21-22), if everyone speaks in tongues at once (clearly this was a problem at Corinth), unbelievers and outsiders will be put off becoming Christians, because they will say that Christians are mad (1 Cor 14:23). These verses therefore clearly prohibit the public use of tongues by everybody simultaneously. By contrast, the use of prophecy discloses the heart of the unbeliever (1 Cor 14:24-25). Three stages are identified: he is convicted by all he hears, judged or called to account by all he hears, and the hidden things of his heart are disclosed (at least to himself). The reason for this is that if background items are given too explicitly (and are correct), then it becomes very difficult for the hearer to test objectively the "unknown" part of the prophecy. Moreover the Holy Spirit can bring true conviction without such pressure.
Managing the church meeting
Everyone should bring something to the meeting, those things which will build up other people (1 Cor 14:26).
Managing tongues
1 Cor 14:27 regulates the use of tongues in a meeting.
There are to be only two or three tongues, in turn, apparently then to be followed by a single interpretation. On the face of it this appears to limit the total number of tongues in the meeting as a whole, but this is not the only possibility. In the absence of someone able to interpret, tongues are not to be used aloud (1 Cor 14:28). Clearly Paul envisaged that people would know who in the meeting could do this. Thus some people were known to be able to interpret tongues, and to be able to do it on every occasion that tongues were used.
Managing prophecies
In a similar way to the management of tongues, the number of consecutive prophecies is to be limited to two or three, and those prophecies are then to be tested (1 Cor 14:29). 1 Cor 14:31 says that all may prophesy one by one in the meeting, which makes it improbable that Paul is placing a limitation on the total number of prophecies in a meeting. The meeting must then pause while this group of prophecies is weighed (i.e. tested) before the meeting continues. Further prophecies (1 Cor 14:31) can be given later in the same meeting, but after each group of prophecies, the same assessment process must take place. This ensures that everybody is allowed to make their God-given contribution, but in a way which allows the church to assess each contribution while they still remember the details, and each person to be built up by them.
Testing prophecy
Weighing prophecies appears to be generally completely misunderstood today. Most people think it just means "receiving" what has been said. In fact the purpose is to test the prophecy, to keep whatever is good and to firmly reject what is bad, as commanded explicitly in 1 Thess 5:19-22. This should be done by open discussion, with detailed consideration of the arguments, and with comparison with scripture and earlier revelations (compare, for example, Acts 11:1-18; Acts 15:1-29).
All prophecy is imperfect (1 Cor 13:9), and the discussion process is important in identifying its deficiencies, and indeed whether it is a God-given message at all. The proper exercise of this process will in itself tend to purify the meeting of spurious prophecy. The process of discussing the prophecies has the effect of making everyone think about them, and is an important part of making sure that the church does not miss the importance of what God is saying. The testing should not only be of what is said, but also of the spirit of the prophet himself, and how he reacts to the discussion process. The wisdom from above has the attributes in James 3:17-18, while worldly or devilish wisdom will reveal itself by its reactions to any criticism (James 3:14-16). 1 Cor 14:30 also shows that the general church practice of allowing preachers to continue without interruption, and without subsequent discussion of what they have said, is not scriptural.
All may contribute, one by one
1 Cor 14:31 says that all can prophesy one by one, but subject to the "continuous assessment" process for prophecies already noted. This verse shows clearly that Paul does not intend to limit the total number of prophecies in a meeting to two or three: that would be a mistake, as the devil has only to stir up that number of rubbish prophecies to prevent God's true word being brought by others. We should, however, be careful to consider whether our contribution will be helpful to teach and encourage others, or best left until another occasion (1 Cor 14:31).
True spirituality is under total conscious control: no confusion
All God's true prophets are always in control of themselves (1 Cor 14:32). There is no place for any kind of compulsive behavior (such is demonic) or any other manifestations which are not under the conscious control of the Christian concerned. Anyone who exhibits any manifestation and says they could not help their behavior proves they were not operating under the Holy Spirit.
Prophets operate best in a disciplined meeting where everyone is fully paying attention to what is going on, and where only one thing happens at a time. This orderliness is also revealed by the fact that Paul expects the church to be sitting in an orderly way during the proceedings (1 Cor 14:30), and everyone present is under the conscious control of their own spirit. The underlying principle is that because God is not the author of confusion, there should be no confusion (1 Cor 14:33), with everything done decently and in order (1 Cor 14:40). Where there is confusion, we may be sure that an alien spirit is operating. This should be dealt with firmly by those with responsibility to lead the meeting.
An example of conduct causing confusion
Women chattering amongst themselves, or calling out questions to husbands during the meeting (1 Cor 14:35), was not to be permitted because of the general air of confusion this caused (as it does even to this day in some Jewish synagogues). (We note in passing that this passage may suggest that at Corinth the men and women sat separately.)
True spirituality acknowledges the rightness of these instructions
True spirituality is shown by acknowledging the rightness of these instructions given by Paul by command of the Lord (1 Cor 14:36-38). Those who rebel against these commands show they are "not recognized" (i.e. that they are not operating by God's Holy Spirit). A similar statement had previously been made to the Corinthians concerning other issues (1 Cor 11:17-19). Being "not recognized" means that such people are to be avoided, since they do not hold to the teachings given by God (2 Thess 3:6, with 2 Thess 2:15).
Prophecy preferred but tongues not banned
Although prophecy is preferred in church meetings, tongues must not be forbidden (1 Cor 14:39), but everything should be done decently and in order.
This attitude is interesting: Paul knew some would be tempted to ban the public use of tongues entirely, as being much the easiest option. Paul is determined that the Spirit should not be quenched (1 Thess 5:19): he recognizes the gift of unrecognizable languages as genuinely from God, and knows personally just how useful it is. If tongues is banned in public, its use will soon die out in private, and intercession will wither. The spiritual gift of speaking in tongues has long been perhaps the most controversial of all the gifts of the Holy Spirit. This may well be because it is something obviously out of the ordinary, which apparently serves no purpose as far as the natural man is concerned. By contrast, gifts like healing have an obvious value, while gifts like prophecy are often not perceived to be supernatural in origin. Some Christians seek to prevent tongues being used at all (certainly not in church), whilst others elevate tongues to be the test of true spirituality. As usual, the truth lies between these extremes. The object of this paper is to reveal the true scriptural balance concerning tongues. It should be noted that the words used for "tongues" in Greek are the words ordinarily used for languages. An appendix deals with modern errors and other subsidiary matters.
Tongues confined to the New Covenant
Unlike all the other spiritual gifts, the gift of tongues appears never to have been given to anyone during the period of the Old Covenant. This is itself highly significant, since it shows that the Church was given something entirely new, a special sign, at the Pentecost following the resurrection of Jesus Christ, when the Holy Spirit was poured out upon the church. There is a possible Old Testament reference to tongues in Isaiah 28:11, though it would be unwise to say that this was intended to be the primary meaning of this scripture.
Certainly 1 Cor 14:21 clearly refers to this verse in order to show that the use of tongues (without interpretation) is a judgment on unbelief.
Tongues, a sign which follows those who believe (Mark 16:17)
In Mark 16:17, tongues is one of five signs which follow those who believe (casting out devils in the name of Jesus, speaking in new tongues, taking up serpents, drinking deadly things without hurt, healing the sick by laying on of hands). Manifestation of these signs appears to depend upon believing and being baptized (Mark 16:16), tying up with Acts 2:38-42. Mark 16:17-18 clearly cannot merely be a promise that Christians will be good at learning foreign languages. There is no empirical evidence that Christians (in general) find it any easier than anyone else to learn foreign languages when they have to learn them in the usual way. Nor is it plausible that the ability to learn languages in the usual way would be regarded by anyone as a "sign" of belief in Christ. The word "new" (rather than "different" or "many") may be significant, suggesting that the languages may not be merely new to the speaker but could be entirely new, for the purpose of uttering mysteries (see 1 Cor 14:2). Because the list of signs given in Mark 16:17-18 seems a curious mixture, including as it does the ability to take up serpents and drink deadly poison, some people suppose that this justifies rejecting these scriptures. (These verses at the end of Mark 16 have sometimes been regarded as not part of the original text because they are omitted from some of the earliest (though not necessarily the most reliable) manuscripts. However, most scholars now accept the validity of these verses.) Such an attitude is dangerous, and those who take this line often go on to reject other biblical teachings elsewhere which they find hard to understand.
In favor of this text being genuine, and of Jesus having told his disciples to expect to speak in new tongues, is the fact that when the believers were baptized in the Holy Spirit at Pentecost and spoke in other tongues, they do not themselves seem surprised by what was happening to them, and Peter was confident about explaining it all to the crowds (Acts 2). Finally, we note that these signs are to follow those who believe, so that these signs are not limited in time. There is no evidence here to suggest that such signs will ever cease from following those who believe. If such signs do fail, it simply reveals unbelief.

Tongues on the Day of Pentecost (Acts 2:1-42)
The speaking in tongues on the day of Pentecost differs in many ways from all the other cases of speaking in tongues recorded in scripture. Not only did the disciples speak in languages which were known to the people who heard them, but their speaking was accompanied by other signs: the sound as of a violent wind, and the tongues as of fire which descended upon each disciple (Acts 2:2-3). These features are absent from all subsequent recorded instances of being filled with the Holy Spirit and speaking in tongues. They are said here to speak in other (i.e. different) languages (Acts 2:4), contrasting with the "new" tongues of Mark 16:17. Acts 2:6 says that the people were attracted by "the happening of the sound". Although a different word for sound is used in Acts 2:2, it seems more probable that the people were initially attracted by the sound as of a violent wind (Acts 2:2) rather than by the sound of the disciples speaking in tongues (Acts 2:4). Although the text gives the impression that the disciples were all speaking at much the same time, their speaking must have been sufficiently separate and distinct for each hearer to pick out his own language and hear it properly and clearly, and to hear its full content and meaning: the declaration of "the great deeds of God" (Acts 2:4-11).
What the people heard was in their own native dialect (Acts 2:6). This word for dialect is different from the word for languages used in Acts 2:4, and signifies correctness of accent as well as of the words, so that each hearer heard the language just as he had learned it from his mother (Acts 2:8). This explains the hearers' surprise that all the speakers were Galileans (Acts 2:7), because when people learn languages naturally the accent is always "foreign", and it explains why the hearers felt the phenomenon required some explanation (Acts 2:12). Between them, the disciples spoke a wide variety of dialects: some of these dialects were of people close by (Judea) as well as of people far away (Acts 2:9-11). Some commentators suggest that the tongues at Pentecost were not really understandable languages, and that the hearers' understanding of these tongues was a miracle of hearing rather than of speaking. They say that while some of the people (those with hearts open to God) heard the tongues as comprehensible languages, others were mocking unbelievers who thought them merely drunk (Acts 2:13). Although it may be true that mocking unbelievers may only hear "gabble" (an application of "He who has ears, let him hear", and a fulfillment of Isaiah 28:11), Acts 2:4 makes clear that the disciples were speaking in real languages which the Holy Spirit gave them to speak out. Acts 2:4 shows that the language and its content were given by the Holy Spirit, but, as in 1 Cor 14:28, it was up to the speaker whether he spoke out or remained silent. Miraculous hearing is therefore not an adequate explanation of the fact that the tongues at Pentecost were understandable. Acts 2:14 onwards shows that all these people were perfectly capable of understanding Peter when he spoke to them in his normal Galilean accent, so the purpose of the miracle of tongues was not to enable the people to be evangelized.
Peter goes on to preach powerfully concerning the prophecies of the death and resurrection of the Messiah, declaring that they were witnesses of the fulfillment of these prophecies, and that in consequence they had received the promised Holy Spirit. Peter says that all of them (the devout men present (Acts 2:5), Jews and proselytes (possibly even gentiles, in view of "the temporarily residing Romans" of Acts 2:10), their children, and all far away, everyone that our God calls) can receive the same promise on condition of repentance and baptism (Acts 2:38-40). Although nothing is said specifically about what the 3000 added that day received when they repented and were baptized (Acts 2:41), they must have received the fulfillment of Peter's promise. It is inconceivable that they received any less, since in Acts 8:14-17 the deficiencies of the experience of the Samaritans were considered a serious problem which had to be remedied by an Apostolic visitation.

The significance of tongues as a sign of the New Covenant
Pentecost was an unusual feast in that it used leavened bread (unlike the others, which used unleavened bread), so it was appropriate that it was at this particular time that the Holy Spirit should be poured out on ordinary sinful people.

Gift of tongues - sign of the end of Babel
The gift of tongues is a sign to the believer that God by the Holy Spirit has gained full control of his entire personality to obey God. The heart is filled with gladness when the Holy Spirit takes control (Acts 2:26), and out of the abundance of the heart the mouth speaks (Matt 12:34; Luke 6:45). James tells us that no human being can control the tongue (James 3:8) and that those who think themselves religious but cannot control their tongues are deceived, and their religion is not real (James 1:26). Everyone makes mistakes, and someone who makes no mistakes in what he says is a perfect man, who can control his whole body (James 3:2).
None of us reaches this standard, but the gift of tongues is like the bridle which controls a horse (James 3:3), or the rudder of a ship (James 3:4-5). But apart from God this control is impossible (James 3:6-10): the control of the tongue that the Holy Spirit achieves in the gift of tongues shows that God has got hold of the entire personality. This control exceeds anything available in the Old Covenant (compare Jeremiah 31:31-34). Could the widespread objection to this gift of the Holy Spirit be the result of the total death of self-will which the exercise of the gift represents?

Gentiles receive the Spirit (Acts 10)

When Paul laid hands on the former disciples of John at Ephesus they spoke in tongues and prophesied (Acts 19:6). The form of the text suggests that the speaking in tongues and the prophesying were two distinct activities.

Tongues in the regular life of the church
All the occasions of the use of tongues so far mentioned were one-off events, and did not form part of the regular life of the church. The main teaching on the use of tongues in normal church life is given in 1 Corinthians chapters 12 to 14.

Tongues is supernatural, not enhanced normal linguistic ability
1 Cor 12:10,28,30 mention the gift of different sorts of tongues, and the gift of the interpretation of tongues. The distinction drawn between being able to speak in tongues and being able to interpret them (1 Cor 12:10,30) shows that these abilities bear no relationship to normal natural linguistic ability, where speaking a language and understanding it are simply different aspects of the same ability. 1 Cor 12:28 emphasizes that these gifts are gifts to the church (rather than to individuals).

Attitudes will determine how useful the gifts will be to the church
1 Corinthians 13 is sometimes regarded as saying that all that matters is love and that the gifts can therefore be ignored as being of no consequence.
This is not, however, Paul's intention: he is making clear that attitudes will determine how beneficial to the church those gifts will actually be. Gifts will eventually pass away, not because they lack value, but because they will have fulfilled their purpose (of building the church).

Tongues may be those of angels
1 Cor 13:1 refers to tongues of men and of angels: thus the tongues spoken are not necessarily the languages of men, but may be, in a real sense, languages of heaven.

Gifts, including tongues, to be used with love
Paul's emphasis in 1 Cor 13:1 is on the right use of tongues - with love - otherwise whatever we say will be a distracting, clashing noise. This description of a clashing noise may possibly suggest that without love the tongue becomes meaningless (compare 1 Cor 14:7-9). Certainly it suggests that the tongue, being prayer, is only as good as the quality of our love towards God. Similar considerations apply to other gifts (1 Cor 13:2-3). The gift of tongues is a continuing gift to the church, showing the full control God has over the entire personality. It is a useful language for the believer to pray in, and it builds up the spiritual strength of the individual believer who exercises the gift. He should however pray for the power to interpret his utterances so that his mind may also be fruitful. Used in this way the gift of tongues is a powerful aid to prayer and intercession. The use of the gift in church is to be permitted, subject to various instructions, detailed in 1 Corinthians 14, which ensure that the gift is used in a way which will build up the church.

Tongues are sometimes said to have ceased
This theory is based on a narrow exegesis of 1 Cor 13, and was dealt with in the main paper. The idea that the reference to "that which is perfect" is to the completed Bible hardly deserves refutation. Anyone who thinks he knows as he is known, or already sees God face to face, is deluded.
The reason why we believe the scriptures are necessary and sufficient for salvation is that they were sufficient for that purpose for Christians of the first century. Today we need no more than they did as far as salvation is concerned. The Apostles of Christ of that time were given all the revelation needed for that salvation to be made known to us (Matt 28:18-20 etc.).

Many people claim that the "tongues" of today are not the same as those in the New Testament. This is a curious claim, since there is no way that anyone who was not there can make such a statement. The claim seems to rest on two assumptions: first, that all N.T. tongues were in languages known to at least some of the people who were present; second, that what people speak today is "repetitive gabble". The first of these assumptions arises from assuming that the situation in Acts 2 was the biblical norm, when people heard the mighty works of God in their own language. This approach then assumes that "interpretation of tongues" is simply that someone in the meeting knows the language (naturally?) and says what the tongue meant. However, Paul makes clear his view in 1 Cor 14:2 that tongues are mysteries spoken towards God that no man understands, showing that the norm for tongues was that they were not naturally understood by anyone present. The charge that all modern tongues is just gabble is dealt with above. It reveals unbelief in the heart of the hearer.

Many people are filled with fear that they may receive "demonic tongues". Interestingly, there is no reference to such in scripture, even though such tongues occur in various non-Christian religions. Nor does Paul seem to think this a problem among Christians - even among such carnal ones as at Corinth - but then they were purified by persecution, and had been converted under the genuine gospel - aspects frequently not the case today.
The answer is to ensure that no one attempts to receive the Baptism in the Holy Spirit in advance of genuine repentance towards God and faith in Jesus Christ, expressed by obedience in Baptism. No person should be prayed for in connection with this gift who is walking in knowing disobedience to the commands of God. In such circumstances, will God give a stone instead of a fish (Luke 11:1-13)? Those who have received another spirit or another Jesus (2 Cor 11:4) should seek God to be freed from all which is not genuine, and seek to be fully obedient to the true Gospel of Christ. In these circumstances it is important to be sure not merely that the false spirits are removed from their lives, but that they receive the genuine Baptism of the Holy Spirit in their place (Matt 12:43-45).

Some groups have completely rejected Paul's apostolic command not to all speak in tongues at once (1 Cor 14:23,27,37-38). Some argue that Paul is only talking about a situation where there are unbelievers present, and claim that it is perfectly acceptable if everyone present is a Christian. Others argue that it is perfectly acceptable provided it is the "overflow of the Spirit" (whatever that means). This seems to be in flat denial of the obligation for the spirit of the prophet to be subject to the prophet (i.e. that any Holy-Spirit-filled person can control what flows out from them) (1 Cor 14:32). Some have attempted to argue that the "prayer with one accord" in Acts 4:24 justifies the practice - that the only way they could have prayed thus was by praying in tongues all at once. Frankly this hardly requires refutation - tongues is not mentioned. Moreover, the clear form of the prayer shows that it was coherent and in an understandable language. Praying with one accord simply means that everyone had the witness of the Spirit concerning the prayers offered, irrespective of who it was who opened their mouth and spoke.
1 Cor 14:39: "Wherefore, brethren, covet to prophesy, and forbid not to speak with tongues." To those who argue that simultaneous tongues is acceptable when everyone present is a Christian: how can anyone be sure that there are no unbelievers present? Ezra 3:11-13 is also prayed in aid of the same practice; again, the suggestion that this has any relevance to everyone praying in tongues at once is ridiculous.

These notes are not comprehensive but should stimulate personal Bible study. Every effort has been made to be accurate, but the reader should test everything in accord with the example of Acts 17:11 and the command of 1 Thess 5:21. Errors, or queries which remain unresolved after consulting the LORD, should be referred to the author: R H Johnston. © R H Johnston 13.8.1995. This paper may only be copied in its entirety for private non-commercial use. All other usage requires the written permission of the author.
72hrJetsetterGirl was keen to explore Poland, in particular Krakow. In her research she found that yes, Krakow is probably the jewel of Poland; however, in terms of things to do and see in the former capital, one day would be more than enough for 72hrJetsetterGirl. Whilst researching this adventure, the package deal to Warsaw was significantly cheaper than to Krakow, another reason to choose Warsaw. From Warsaw, day trips to Krakow and Auschwitz could be arranged. In the planning stages, 72hrJetsetterGirl had found tour companies that offer these day trips for less than US$100. OK, theoretically everything was sorted. A couple of weeks prior to jumping on the Air France flight from Dulles International Airport to the Polish capital, 72hrJetsetterGirl thought about planning those side adventures. You remember that US$100 tour to Auschwitz? Well, when 72hrJetsetterGirl entered the intended day of the tour and the number attending, just one, guess what … the cost quadrupled to US$400. WHAT THE!! After searching a couple of other travel websites for a tour to Auschwitz from Warsaw, the same thing happened. OK, it must be the going rate. Krakow is about a 3.5-hour high-speed train ride from Warsaw; yes, it would be a very long day, so 72hrJetsetterGirl contacted the travel company to see if they could accommodate Krakow and Auschwitz on the same day. Yes, they were very accommodating and able to do so. In the planning stage, 72hrJetsetterGirl thought that whilst she was in that part of the world, let's also include a trip to the Wieliczka Salt Mine, and this would mean overnighting in Krakow. With this additional side trip, it was still more cost effective to fly in and out of Warsaw. Everything was now sorted! The transit from Chopin International Airport to 72hrJetsetterGirl's accommodation was a piece of cake. Warsaw has a very modern, efficient and cost-effective airport transportation system for travelers. See details in the Nuts and Bolts section of this blog.
After checking into her 4-star accommodation, it was now time to explore the city. The accommodation was situated in the embassy area, approximately a 25-minute walk from the Old Town district. Yes, the Australian Embassy was just around the corner from 72hrJetsetterGirl's hotel. Hopefully she would not be calling upon their services during this adventure. It is always nice to see the Aussie flag blowing in the breeze when abroad. To get acquainted with Warsaw and to keep the jet lag at bay, 72hrJetsetterGirl opted for a free walking tour of the Warsaw Jewish Ghetto. There is nothing like the flight touching down at 11.30am and then joining a walking tour starting at 2pm. Yes, 72hrJetsetterGirl certainly tries to do and see as much as she can within her 72-hour adventures, and she quickly made her way to Sigismund's Column in the main square of Warsaw for the tour. The temperature in Warsaw whilst 72hrJetsetterGirl was in town was hitting around 38°C, probably not the best conditions to be out walking in the sun. Oh, a hat, sunnies and a water is all you need! Patricia, the guide, a native Varsovian fluent in four languages (Polish, English, Spanish and Italian), started with a brief overview of what the tour would be covering over the next two or so hours, and promised to do her best to keep us in the shade as much as possible. Man, you have to give it to these Europeans: they are usually fluent in more than two languages. Yes, the majority of us native English speakers certainly lag behind in the linguistic area. The first stop on the tour was in front of the Monument to the Heroes of Warsaw, also known as the Warsaw Nike (Just do it! LOL!). The monument commemorates all those who died in the city from 1939 to 1945: the defenders of Warsaw (1939), the participants of the Warsaw Ghetto Uprising and the Warsaw Uprising, as well as the victims of the German terror. The biggest challenge for this monument was casting the sword, which weighs about 1000 kg.
The special design of steel bars embedded inside it makes it very strong, and if it is a bit breezy the sword can sway from its position by up to 15 cm… mmm, lucky 72hrJetsetterGirl was viewing the monument from across a busy road. Wandering down a once bustling street in the Ghetto, 72hrJetsetterGirl recognized an Aussie accent in the tour group and was introduced to Tegan and Jono from Newcastle (can't get more Australian than Jono!). The traveling Aussie duo were on a 12-month "working adventure" in Europe and were very keen to share their recent travel experience with 72hrJetsetterGirl. Yes, the story of the dreaded bed bugs in the hostel dorms. Oh, the things you remember when on a tour. Throughout the tour, Patricia explained the horrific living conditions in the Ghetto: the entry gates were guarded by the Nazis, who restricted the amount of supplies (food and medical) allowed into the Ghetto. The recommended intake is around 1500 calories per day; those living in the Ghetto were given 650, which resulted in starvation and the spread of disease. Just terrible conditions. Patricia then went on to advise that people could escape from the Ghetto; however, remaining alive outside the perimeter walls was the real test, and few managed it. The details that the tour guide provided were hard for the group to comprehend. 72hrJetsetterGirl had tears of sorrow running down her face, as did the majority of others in the group, upon hearing of the atrocities that occurred to fellow human beings. The mood of the tour group was certainly very somber, as the realization of what occurred between 1940 and 1943 in this part of the world is still to this day very, very hard to comprehend. Patricia made reference to two very important people who tried to spread the word to the West on what was actually happening in Warsaw during this period of history. Firstly, Jan Karski, who has been christened the "man who tried to stop the Holocaust".
Secondly, Irena Sendler, who was nominated for the Nobel Peace Prize for her actions to save children by smuggling them out of the Ghetto: hiding them in ambulances, taking them through underground passageways, or wheeling them out in suitcases or boxes to safety. Irena made notes of the children's names and stored the information on little pieces of paper in canisters. 72hrJetsetterGirl hopes that the time will come when the world officially recognizes the courageous and heroic efforts of Jan and Irena. Before reaching the final destination of the walking tour, Patricia educated the group on Ludwik Zamenhof, a Polish-Jewish medical doctor, inventor and writer, and creator of the most successful constructed language in the world, Esperanto. The murals here depict common expressions in Esperanto. Also around this area, the housing apartments are raised significantly above street level: as the Ghetto was reduced to rubble, instead of removing the rubble, apartments were just built on top. Even today, local residents tell of "ghostly" occurrences happening in the area. The last stop on the tour was in front of POLIN, the Museum of the History of Polish Jews, and the Jewish Ghetto Memorial. After a very informative and heartfelt tour, it was time to reward Patricia for her services, and in return a useful map/guide was given to all. This useful pocket guide would be very handy for 72hrJetsetterGirl to navigate her way around Warsaw. 72hrJetsetterGirl bid farewell to Tegan and Jono, wished them well on their continued adventures around Europe, and hoped they would not have any more encounters with the dreaded bed bugs. As 72hrJetsetterGirl had skipped lunch, it was now time to taste some traditional Polish cuisine in New Town. Just for the record, New Town is 100 years younger than Old Town, which is about 700 years old.
Whilst browsing the menu, it dawned on 72hrJetsetterGirl that Polish cuisine is not what you would call "light" or suitable for someone watching their waistline (more like watching the waistline expand). That being said, 72hrJetsetterGirl settled for placek po zbojnicku (potato pancake with spicy pork goulash), washed down with a local draught beer. Taking a conservative approach, 72hrJetsetterGirl opted for the 0.33l size of ale instead of 0.50l – it's all about moderation. From reading history books, 72hrJetsetterGirl knew that the locals of Warsaw were not the type to sit back and wait out the Nazi occupation. The Warsaw Uprising enraged Hitler, and his retribution was brutal: destroying anything of cultural importance and setting whole districts on fire. When liberation came, over 90% of the city lay in ruins. Today, thanks to the indefatigable spirit of the Polish people, Rynek Starego Miasta has been rebuilt, using paintings and photographs as architectural blueprints to recreate the burgher-style houses that once framed the Old Town Square. This work has been recognized, and the Square is now a UNESCO World Heritage site. Warsaw is now the old new town of Europe. As the sun was setting in the west, 72hrJetsetterGirl climbed the stairs to the viewing platform at the top of St Anne's Church for a panoramic view of Warsaw Old Town with the surrounding walls and the Royal Castle, which was also rebuilt after the war. It was now time for 72hrJetsetterGirl to wander down Krakowskie Przedmiescie past the Presidential Palace to the fake "Palm Tree", and hang a right along the main street of Al. Jerozolimskie back to her hotel. The fake palm tree is a great landmark for directions, but really, a palm tree in Poland? It is certainly not Dubai! Across the road from 72hrJetsetterGirl's hotel is Poland's tallest and largest structure, standing at just over 231 meters high – the Palace of Culture and Science.
This building was commissioned by Stalin as "a gift from the Soviet People" to the people of Warsaw in 1955. It is estimated that over 40 million bricks were used in its construction. As 72hrJetsetterGirl had enjoyed the Warsaw Jewish Ghetto walking tour, she signed up for the Old Town walking tour. Meeting at Zygmunt's Column, she was greeted by another cheerful guide named B. As pronouncing B's Polish name is a bit challenging for a non-Polish speaker, he prefers to go by B to make it easier for us tourists. B was certainly a very passionate Varsovian and firmly believed that Zygmunt made the right decision back in 1596 to move the capital of Poland from Krakow to Warsaw. Nothing like a bit of intercity rivalry. The first stop on B's tour was the Royal Castle, which was rebuilt from 1971 to 1984. As the group meandered down the cobblestone alleyways, the next stop was St John's Cathedral. As Pope John Paul II was a Pole, he conducted his first official tour as Pope from Warsaw, which irked the residents of Krakow, as the Pope had spent the previous four decades living in that town in southern Poland. B was certainly gloating at this stage. A plaque is located on the walls of the cathedral to commemorate this event. 72hrJetsetterGirl thought that the plaque looked more like a satellite dish than something of religious importance. Oh well, each to their own. Of course, you cannot have an Old Town walking tour and not visit the main focal point of the town – the Square. B provided a narrative on the significance of the mermaid statue located in the centre of Rynek Starego Miasta. Leaving the Old Town and walking by the City Walls to New Town, B pointed out to the group a "milk bar". The milk bars are remnants of former Soviet-era life: they sold milk and egg products, and today they offer good-value meals, mostly milk or egg based.
During tours a lot of information is given, and to be honest 72hrJetsetterGirl fails to recall every single fact; however, her ears certainly pricked up when reaching the next information point. Marie Curie was Polish, not French! B pointed out the house where the two-time Nobel Prize winner was born, which now houses the Marie Curie Museum. New Town borders the Jewish Ghetto, and B gave the group an overview of the Ghetto before proceeding to the memorial for the Warsaw Uprising. The Warsaw Uprising took place in 1944 and became both the most glorious and the most tragic episode of the city's history. With all this walking, 72hrJetsetterGirl was ready for a late lunch. Once again keen to try regional Polish cuisine, this time she ordered homemade fried dumplings (meat, potato, onions) served on a hot frying pan with sour cream on the side. Of course, not to disappoint, 72hrJetsetterGirl's beverage of choice was a local beer, still keeping with the 0.33l size. 72hrJetsetterGirl noted a unique statue nearby: Maly Powstaniec, a statue commemorating the child soldiers who fought and died during the Warsaw Uprising of 1944. As 72hrJetsetterGirl loves a good palace, she was keen to visit Warsaw's imperial beauty – Wilanow Palace – dubbed "the Polish Versailles", a 30-minute bus ride from the Old Town. However, the skies of Warsaw decided to open up and drench the cobblestone alleyways, and 72hrJetsetterGirl, rethinking her touristic options, decided to go to the Warsaw Uprising Museum instead. The museum, claimed to be one of Poland's best, was about a 25-minute walk from her hotel in the opposite direction to Old Town. Surprisingly there was a minimal wait to purchase the entry ticket for the museum. Once inside, 72hrJetsetterGirl was certainly overwhelmed, as it was packed with many people and filled with interactive displays, photographs, footage and exhibits.
Not electing to use the audio guide, 72hrJetsetterGirl navigated her way around the museum the old-fashioned way, with a floorplan guide. Yes, the museum provides a good description/overview of the Uprising, the role of the Allies (or lack thereof, you could say) and an explanation of why Warsaw was bombed to rubble. In her research, 72hrJetsetterGirl did read in the tourist forums that the direction indicators in the museum, i.e. the sequence of the exhibits and the flow from one level to another, were not that clear and made the visit a little more challenging. On the whole the museum is very well done; however, once again, the overload of information is sometimes a little hard to retain in a short period of time. Or has 72hrJetsetterGirl had too many beers!! After leaving the Uprising Museum, and feeling sad, 72hrJetsetterGirl made her way along Prosta Swietokrzyska to Zlote Tarasy for some retail therapy. The Golden Terraces, as it is also known, is the most significant shopping center in Warsaw, located in the very heart of the city. The shopping centre has all the major retail brands that are available in the US. After walking the multiple levels of the shopping complex, 72hrJetsetterGirl found a supermarket on the lower level where she could pick up some supplies for dinner; at this stage all she wanted was a garden salad to take back to her hotel room, as she had a very early morning start for her trip to Krakow. 72hrJetsetterGirl had enjoyed her time visiting Warsaw and Krakow; even though it was quite an emotional adventure, she was now ready to head back to Washington DC to start work the next day. She arrived at Chopin International Airport in sufficient time, having received notification from her travel app that the flight from Warsaw to Paris was "looking good" (code for on time). When presenting her travel documentation to the check-in agent, she was quickly advised to go to the Air France Customer Service desk, as there had been a change.
Actually, that travel app had it all wrong… the flight from Warsaw to Paris was delayed by 2 hours, and hence 72hrJetsetterGirl would miss her connection in Paris for the onward 7-hour flight back to the US. The helpful Air France staff transferred her to a Lufthansa flight which was leaving earlier; instead of transiting through Paris it would now be Munich, arriving back in Washington 15 minutes later – no big deal. Everything was now OK, or so 72hrJetsetterGirl thought. As 72hrJetsetterGirl made her way to the Lufthansa check-in counter, she was promptly advised that they would not accept her, as their flight out of Warsaw was also delayed and they did not want to handle the possibility of a missed connection in Munich. Back to the Air France counter. After 35 minutes of waiting, whilst the agent tried various other routes, 72hrJetsetterGirl was advised that she would not be heading back home that day: either overnight in Warsaw, or go to Paris and overnight there. Off to Paris 72hrJetsetterGirl goes. Whilst waiting on the tarmac, the captain came over the PA system in a very calm and captain-like manner, advising the passengers of the reasons for the delay. As it was a public holiday in Poland, a military air show was being held, hence all commercial flights in and out of Warsaw were disrupted; along with bad weather over Germany and a mechanical issue, the flight would be arriving 2 1/2 hrs behind schedule. Upon arrival at Charles de Gaulle Airport, 72hrJetsetterGirl was provided with accommodation, transfers and meal vouchers, and was booked on the flight back to the District for the following day. A night in Paris?? No, it was not that glamorous – an airport hotel right under the flight path – very little sleep. On the upside, 72hrJetsetterGirl received an upgrade for her flight from Paris to Washington as compensation for the inconvenience. Hey, this is all part of the traveling experience; just need to roll with it.
Krakow was on 72hrJetsetterGirl’s bucket list and after doing some research it was more economical to base herself in Warsaw. An opportunity to explore another city. A day trip to Krakow from Warsaw is certainly a very early morning start with a very late finish in the evening. As 72hrJetsetterGirl does not have the luxury of added vacation days (she does work), the plan put together by the travel agent was well worth the money: a visit to Auschwitz, the Salt Mines and a guided tour of Krakow. If you have time, yes of course you could plan the adventure yourself. Unfortunately, due to the extended stay in Krakow, 72hrJetsetterGirl did not have the opportunity to visit the Warsaw Zoo. Just a note: even though Poland is part of the EU, it has not converted its currency to the Euro as yet and uses the PLN. Schengen immigration policies apply for Poland. 4 nights, with 3 1/2 days for sightseeing. Departed Washington DC on Thursday evening, arriving into Paris Friday morning with about a 90-minute layover and then into Warsaw around 11.30am. It was interesting: as 72hrJetsetterGirl and the other passengers were disembarking the plane in Warsaw, Polish Police Officers were at the plane’s door for a passport inspection. She originally planned to depart Warsaw around lunchtime on Tuesday with an 80-minute layover in Paris before connecting back to the US; however, due to the travel delays, she overnighted in Paris an additional night. The 20-minute journey in a modern and efficient train from the airport to Warsaw Central train station costs 4.40zł (or approximately 1 euro). 72hrJetsetterGirl’s hotel was located within a 2-minute walk of the Central Station and opposite the most well-known landmark of the city – the Palace of Culture and Science. It was a 25-minute walk to Old Town from the hotel and about a 10-minute walk to the City Centre. Definitely check the weather guide before going, as 72hrJetsetterGirl’s adventure occurred during an extreme heatwave. 
For females, 72hrJetsetterGirl would not recommend wearing any form of heeled shoe, as the cobbled streets make walking a little bit more challenging in the Old Town area. Casual attire would be more than appropriate. The trip to Warsaw was certainly an emotional adventure for 72hrJetsetterGirl, given the events of World War II, the devastation inflicted on the Jews and the destruction of the city of Warsaw. As this is part of history, it is important that we remember those who suffered and hope that these terrible acts of crime are never, ever repeated. Whilst planning for her St Petersburg (that is Russia, not Florida USA!) adventure, 72hrJetsetterGirl’s Estonian co-worker suggested going in the summer to visit the magnificent Tsar palaces of Pavlovsk and Catherine, located a stone’s throw from the former Russian capital – St Petersburg, previously known as Leningrad. As a Baltic native and fluent Russian speaker, the co-worker suggested that 72hrJetsetterGirl organize a tour of the palaces, as very little English is spoken outside of the city limits and navigating her way there might prove to be a bit of a challenge, as there are no train, metro or direct bus services available. Even though 72hrJetsetterGirl is always up for a challenge, probably this is not the time to rely upon her Google Translate app. As beautiful as the Cyrillic alphabet is, it may be a touch challenging to enter the characters onto the keypad of a small phone (yes, you can read iPhone 4 here). Taking this advice on board, a day tour was booked prior to leaving ‘Merica. Whilst waiting to be collected from her apartment just off Nevsky Prospect, 72hrJetsetterGirl assumed that she was part of a small tour group to visit Catherine and Pavlovsk Palaces. You know what they say…when you assume, you make an a$$ of yourself; well, this happened to 72hrJetsetterGirl. No small tour group, but her own personal tour guide, with her own personal driver in a Mercedes Benz. 
SWEET… As this outing occurred on 72hrJetsetterGirl’s birthday, she certainly felt like a Russian princess for the day! After the introductions, Alex the driver found out that 72hrJetsetterGirl was an Aussie and had great pleasure in telling her that the Bee Gees are his all-time favourite band! Therefore the music of choice for the day was that of the British trio of brothers who immigrated to Brisbane in the 60s and took the world by storm with their tight three-part harmonies. For the rest of the day 72hrJetsetterGirl wanted to break out John Travolta’s “Saturday Night Fever” moves and reach the vocal stratosphere of Barry Gibb’s “Stayin’ Alive”, and for all concerned it would have been a “Tragedy” to witness. The private tour headed south on the St Petersburg motorway for about 30 kms to the palaces located in the municipal town of Pushkin. On the way, Irina the guide provided a very comprehensive history lesson on the Russian dynasty – the House of Romanov. 72hrJetsetterGirl’s head was spinning with Catherine the Great, Peter the Great, Catherine I – time to purchase a family tree chart. Pavlovsk Palace is located 4 kms down the road from Catherine Palace, and Irina advised 72hrJetsetterGirl that it was better to go to Pavlovsk Palace before Catherine Palace; 72hrJetsetterGirl thought to herself that this was for historical importance. No, that was not actually the case. Irina advised, tongue in cheek, that the crowds at Catherine Palace would be lower after lunch, as the river boat cruises have their excursions there in the mornings and would have departed St Petersburg on to their next European destination by the time the pair arrived in the afternoon. 72hrJetsetterGirl quickly learned that Irina was a character and certainly appreciated her eagerness to please her client on this tour. Catherine the Great (also known as Catherine II of Russia) gave a parcel of a thousand hectares of forest along the winding Slavyanka River to her son Grand Duke Paul (Paul I) and his wife Maria Feodorovna. 
Pavlovsk was built in 1777 to celebrate the birth of Paul’s and Maria’s first-born son (Alexander I of Russia). Getting the drift as to why 72hrJetsetterGirl was getting completely confused by all the titles! Pavlovsk was initially designed by Catherine the Great’s official architect, the Scotsman Charles Cameron. Cameron’s concept was to design a palace in the Palladian style. However, there was some tension regarding the design style between the Scotsman and the Russian royal owners. This tension led to a parting of the ways in 1786 and a new architect was appointed, the Italian Vincenzo Brenna. Brenna’s style reflected Paul and Maria’s preferred taste of Roman classicism. History lesson in Russian royalty: Catherine the Great died in 1796 and Paul became Emperor. He enlarged Pavlovsk, then was murdered by his court in 1801 and his son Alexander became Emperor. Maria remained in the palace and created a shrine to her late husband. This is 72hrJetsetterGirl’s concise version of the Romanov dynasty. Irina also related the history of Pavlovsk Palace. After the October Revolution (1917), Pavlovsk was converted into an art and history museum and opened to the public. During WWII, Pavlovsk Palace suffered tremendous damage due to the Siege of Leningrad. Fortunately, the curators of the museum were able to remove all valuable objects to a safe haven. The town of Pushkin was occupied by the Nazis, who plundered the palace, destroyed many garden pavilions and knocked down 70,000 trees before retreating and burning the palace down. Restoration and reconstruction work commenced after the war and is now close to completion. Today, the palace and surrounding English Gardens are a Russian State Museum and public park. Upon arrival at the 18th Century Russian Imperial residence, Irina used her many years as a personal guide to skillfully navigate 72hrJetsetterGirl to the front of the line, to outwit the flag-waving Chinese tour guides. 
Otherwise 72hrJetsetterGirl’s experience at Pavlovsk might have been many hits to the head from the selfie-stick-wielding Chinese tourists. Once inside the palace, 72hrJetsetterGirl’s jaw dropped. From the outside, Pavlovsk is what you might call an understated palace – no big WOW factor. As Irina guided 72hrJetsetterGirl through the 40-room palace, Brenna’s influence became very evident. The first room was the circular Italian Hall with its sky-high dome, chandelier hanging from the ceiling and Roman-like statues located in each of the alcoves. Next stop in the 18th Century Russian Imperial residence was the Greek Hall with its Roman columns and glass-encased chandeliers – a perfect room for dancing the night away. The other key rooms in Pavlovsk Palace are the Throne Room, the Chapel, the library, the dining room and the state bedroom. The palace has an excellent art collection, in particular the painting “Cupid Shooting a Bow” by Carle van Loo (1761). The eyes of Cupid follow you around the entire room. The tour took about 90 minutes. Before exploring the English Gardens of the palace, 72hrJetsetterGirl wanted to purchase a small memento (aka a magnet) of her visit to Pavlovsk in the state-run gift shop. Whilst browsing through the gift shop, 72hrJetsetterGirl mentioned to Irina that a book in the gift shop was half the price of what she had seen in a bookshop in St Petersburg. Irina, who had appeared to be pro-democracy from her previous comments, responded that this was one of the benefits of living under a state-run government: the prices were all the same no matter where you shopped. This comment reinforced to 72hrJetsetterGirl that for a Russian, coming out of a state-run economy into a “free market” must have been a challenging transition. With storm clouds threatening, 72hrJetsetterGirl and Irina were able to wander around a small portion of the thousand hectares of Pavlovsk’s English Gardens to admire their beauty. 
After exploring the gardens, Irina summoned Alex to bring the car around for the next destination on the day’s itinerary – lunch! During lunch 72hrJetsetterGirl had an opportunity to get to know Irina in a more personal way. Irina spent her early years in Germany and, after WWII, her family immigrated to Russia and settled in St Petersburg. A daughter of a doctor, Irina explained her life under communist rule as she entered university to study law. Today Irina practices law as a legal aid lawyer and guides tourists around her beloved St Petersburg. After consuming a 5-course Russian lunch, 72hrJetsetterGirl was ready for a Nanna nap; however, Irina was keen to keep this adventure going and show 72hrJetsetterGirl the pièce de résistance of palaces – Catherine Palace. Irina navigated 72hrJetsetterGirl to a side entrance of the palace to avoid the long queues and expedited a speedy arrival at the front door. This enabled 72hrJetsetterGirl to witness a different side of the palace, which many tourists do not have the opportunity to see. Throughout the day, Irina was always offering to take happy snaps of 72hrJetsetterGirl as treasured memories of her visit. This 325-metre-long Rococo palace located in the town of Tsarskoye Selo (Pushkin) was the summer residence of the Russian Tsars. Catherine I of Russia (no – not Catherine the Great, aka Catherine II, but Catherine, the second wife of Peter the Great) hired a German architect in 1717 to construct a modest two-storey summer palace for her pleasure. The sheer grandeur of the palace can be attributed to Catherine’s daughter, Empress Elizabeth. Elizabeth chose the palace as her chief summer residence and commissioned 4 different architects during its construction. They were instructed (probably ordered) to completely redesign the building to rival France’s Versailles. Nothing like a bit of royal competition between monarchs. 
To say that 72hrJetsetterGirl was totally gob-smacked when she arrived at the entrance would be a complete understatement. Irina informed her that Catherine I was not very happy with Elizabeth. During Elizabeth’s reign, over 100kg of gold was used to decorate the palace exteriors. Mum was totally appalled when she discovered the state and private funds that had been lavished on the building. Just like at Pavlovsk Palace, when the German troops retreated after the Siege of Leningrad, they intentionally destroyed the residence, leaving only a hollow shell behind. Luckily, prior to WWII Soviet archivists managed to document the interior of the palace, which played a critical role in its reconstruction. After slipping on their shoe covers, Irina took charge of the situation yet again to ensure that 72hrJetsetterGirl had a wonderful experience exploring Catherine Palace. Upon entry, the interior of the palace did not disappoint – no less spectacular than the exterior. The State Staircase with its ornate banister and marble cupids gave 72hrJetsetterGirl a taste of what was to come. The 1,000sq metre Great Hall, also known as the Hall of Light, is absolutely magnificent with its gilded stucco decorating the walls, and has superb views of the palace grounds. Other highlights of this Grand Enfilade include the Portrait Hall, which contains portraits of both Catherine and Elizabeth, the Picture Gallery and of course the legendary Amber Room. The recreation of the Amber Room began in 1982; the process took over 20 years to complete and cost more than $12M. This restored room is truly exquisite and a testament to the craftspeople who restored it. For this reason, no photography is allowed in the Amber Room. 72hrJetsetterGirl’s photos were taken from the doorways of the adjoining rooms. After visiting the truly beautiful Catherine Palace, it was time to get some fresh air and explore Catherine Park. 
72hrJetsetterGirl strolled along the garden alleys taking in the beauty of the marble statues, waterfalls, pavilions and ponds within this beautiful park. After spending a fascinating day exploring the municipal town of Pushkin, it was now time to head back to St Petersburg. As the traveling trio made their way along the St Petersburg motorway, 72hrJetsetterGirl asked Irina about a colourful, almost Willy Wonka-looking church she had seen on a poster at the metro station. Irina did not need a moment to think of the name of the church; she knew exactly the church 72hrJetsetterGirl was referring to and asked Alex if it was possible to make a brief stop at that church, as it was on the way back to St Petersburg. Alex had no hesitation in making a detour for 72hrJetsetterGirl. Unbeknownst to 72hrJetsetterGirl, the neo-gothic Chesme Church was built at the direction of Catherine the Great in 1780 and, like most things in St Petersburg during WWII, the church was damaged during the siege. According to Google (Google is certainly 72hrJetsetterGirl’s go-to reference guide), the Church of St John the Baptist at Chesme is considered the single most impressive church in St Petersburg and is a rare example of very early Gothic revival in Russian church architecture. As the church is not situated in the main tourist district of St Petersburg, it would have probably been mission impossible for 72hrJetsetterGirl to navigate her own way there. However, on this occasion the stars were aligned – or Alex and Irina were aligned – to ensure 72hrJetsetterGirl had a wonderful day touring their home town. After a very informative and enjoyable day it was time for 72hrJetsetterGirl to bid farewell to Alex and Irina in St Petersburg. However, 72hrJetsetterGirl and Irina would be reconnecting again to visit Peterhof, another royal palace in St Petersburg. Catherine and Pavlovsk Palaces are certainly a must-see for any tourist visiting St Petersburg. 
When booking the tour back in America, 72hrJetsetterGirl did think the cost of the tour was on the high side, as she thought she would be a participant on a 50+ person bus tour. So, with the advice from her co-worker and the very slim possibility of her returning to Russia, she decided to make the booking even given the cost. In hindsight, it was the best decision. A private tour guide, her own driver, lunch and the ability to tailor the day to her requirements was certainly USDs well spent. As 72hrJetsetterGirl was travelling solo she had the opportunity to connect with Irina in a more personal way and develop a special friendship. Whilst wandering around St Petersburg, 72hrJetsetterGirl did see tourist operators on Nevsky Prospect promoting tours to the royal palaces; however, the tours were only conducted in Russian. If you are visiting Russia and do not speak Russian, 72hrJetsetterGirl certainly recommends that you book excursions prior to leaving your home country. The craftsmanship of the restoration work on the palaces is absolutely superb and the sheer extravagance of them is absolutely mind-blowing. A definite must-see when visiting St Petersburg. When have you been totally blown away by the sheer extravagance of an attraction? Iceland – Experiences are the new possessions! One snowy Sunday afternoon in Washington, DC, 72hrJetsetterGirl was surfing the net in search of a deal for her next “tropical” destination and up popped a good-value independent package to Iceland. Mmmm, another cold and snowy location, she thought, but the travel dates were the “right” time to see the Northern Lights. OK, we all know that Mother Nature can have a mind of her own! Tentatively she hovered the cursor over the link with a sense of anticipation, thinking it won’t hurt to look. Then thought, what the heck, just do it! 72hrJetsetterGirl just could not pass up the opportunity to see the aurora borealis. 
A few days before departing for the world’s northernmost capital of a sovereign state, 72hrJetsetterGirl searched for her reservation on the airline’s on-line system. She entered her surname and reservation code and then, in big RED letters, received an error message saying “unable to find this reservation”. Try again; this time she hit the keys with a bit more force (not sure how this approach would help!) and received the exact same message. What is going on, she thought, as a wave of fear came over her. “Keep calm and phone the airline”. After a discussion with the airline agent, it turned out that 72hrJetsetterGirl had incorrectly typed her surname when making the original booking some time back. Now the name on the e-ticket did not match the name in the passport – HUGE BLUNDER – and she then incurred a change fee of US$50 because of her fat fingers! An expensive lesson learnt. 72hrJetsetterGirl’s first Icelandic experience was an introduction to Reykjavik, otherwise known as 101 (its postal code). 101 is a petite city filled with colour and character that can be easily explored on foot. One of the tallest concrete structures in Iceland (73m), and the country’s largest Lutheran church, is Hallgrimskirkja Church, which offers panoramic views of the city. In front of the church is a statue of Leifur Eiriksson, the first European to discover America. Records suggest that he landed on American soil 500 years before Columbus. The church has an impressive gargantuan pipe organ, and the elevator ride to the top is definitely worth the small fee. Whilst admiring the city’s colourful roofs from the top of the church, 72hrJetsetterGirl met two other travelers who were also exploring this little city with a “big heart”. To christen this new traveling friendship, the trio decided to celebrate in one of the many local craft breweries in town. Check #funfact to find out about Iceland’s beer ban. 
After a refreshing ale, the three new traveling buddies paid a visit to Höfði House, the house in which Gorby and Reagan met in 1986 for the Iceland Summit; a quick stop at the rotating Perlan glass dome with sweeping views of the sea and hills; and a visit to the sculpture garden of Ásmundur Sveinsson. An attraction not to be missed in Reykjavik is Harpa – the country’s concert and conference centre. The centre has a distinctive coloured glass façade inspired by the basalt landscape of Iceland. Very impressive. Anyone for puffin or minke? The new travel buddies found themselves wandering around the ferry terminal area of 101 and stumbled upon a cosy and cute fish market where the Icelandic delicacy of minke whale was on offer. As the buddies debated whether to have minke, salmon or puffin (yes, puffin), another couple overheard the predicament and joined in on the quandary. Just to set the record straight, 72hrJetsetterGirl settled for salmon; the thought of chowing down on “Free Willy” was just too much to bear. One became three and then three became five, and before you knew it the new buddies were on their way across the bay to Viðey Island to tour the Imagine Peace Tower. The tower is a beam of light from a wishing well bearing the words “imagine peace” in 24 languages. The “Tower of Light” was Yoko Ono’s tribute to her late husband John Lennon. Peace Tower – “imagine peace” in 24 languages! The trip to Viðey Island was certainly a stroke of serendipity for 72hrJetsetterGirl. She had been unaware of the tribute, and if she had not met her traveling buddies she would not have visited the tower nor witnessed the amazing light show of the Northern Lights. A smile is all it takes to make new friends and experience new adventures. When in Rome do as the Romans do! Right? Well, Reykjavik has the reputation of having a vibrant nightlife that starts late and carries on long into the early morning hours. 
The five new travel buddies deserved to have some Icelandic fun after a very busy day exploring. As 72hrJetsetterGirl was in Iceland during the winter months, it was more appropriate for her to join a small tour group to explore beyond the city limits of Reykjavik. Today’s adventure was the popular Golden Circle tourist route. This picturesque 300km loop starts in Reykjavik and then routes its way to the southern uplands of Iceland and back. During the day, 72hrJetsetterGirl was in awe of the stunning landscape, which never seemed to end. Definitely postcard material and no need to play Candy Crush between stops! The Golden Circle is an amazing experience of significant importance to the history and culture of this sovereign country, with the added bonus of geological wonders. The first stop to admire was the UNESCO-listed Þingvellir (Thingvellir) National Park. Þingvellir is the site of the world’s first parliament, Alþingi (Althing). The Icelandic parliament gathered at this location each year from 930AD to 1798. Wow! As a geological wonder, Þingvellir is where the North American and Eurasian tectonic plates meet. 72hrJetsetterGirl had an opportunity to walk between the plates along the Almannagjá fault. Another interesting fact is that Þingvellir has the largest natural lake in Iceland. As 72hrJetsetterGirl traveled along this natural beauty route, there was a photo opportunity to get up close and personal with the adorable pony-sized Icelandic horses. Who could resist! The ponies were brought to Iceland by Norse settlers in the 9th century, and now Icelandic law prohibits any horses being imported into the country; once exported, they are not allowed to return, resulting in a horse breed with very few diseases. Oh, they are so cute! The iconic waterfall of Iceland, Gullfoss, is the next attraction along this scenic route. 
When 72hrJetsetterGirl arrived at the waterfall, she thought that the Hvítá (White) river simply vanished into the earth, as the edge is obscured from view. The force of the falling water matched with the beauty of the untouched nature offers a spectacular view. Not only is Gullfoss, also known as “Golden Falls”, a wonder of Mother Nature, it also tells a wonderful story. Thanks to Sigríður Tómasdóttir we can all enjoy this amazing waterfall today. The highlight of the day for 72hrJetsetterGirl was the Haukadalur geothermal area. There is nothing like the aroma of sulphur in the air, the hissing of steam vents, the bubbling and gurgling of mud pools and the eruption of thermal water and steam shooting high into the air to activate one’s senses. The main attraction at Haukadalur is the mighty Strokkur geyser. Strokkur, meaning “churn”, is constantly boiling, so there is no warning of its imminent eruption. On a positive note, as this force of nature erupts every 5-8 minutes there are ample opportunities for that perfect photo/video, or just to be entertained by the water show over and over again. While doing research on things to do in Iceland, the “experiences” activities were certainly overwhelming – scuba diving between the tectonic plates sounded amazing, but 72hrJetsetterGirl is not a fan of getting her hair wet when swimming, so she passed on this activity. One that got her attention was a “glacier walk”. As a native of a very dry and arid country, the opportunity to experience a glacier walk in her own backyard just did not exist. Once again 72hrJetsetterGirl was collected from her hotel and joined a very small tour group to explore Iceland’s South Coast, which is on the Eurasian plate. As the travelers were getting to know each other on their way to Vik, 72hrJetsetterGirl learnt that she was the only one in the party doing a glacier walk. 
The others had opted for snowmobiling and felt disappointed, as the weather forecast was not promising for that adventure. They now had “experience” envy! After traveling around 180kms in a southeast direction from Reykjavik, Vik was the first stop of the day. Vik is the southernmost village and warmest place in Iceland and home to one of the ten most beautiful beaches on Earth. The “famous” black basalt sand beach is one of the wettest places in Iceland. Vik is surrounded by cliffs and has a quaint red church on its rolling hills. The cliffs to the west of the beach are home to puffins, which burrow into the shallow soils during nesting time. Out to sea lie stacks of basalt rock, remnants of a previously extensive cliff-line. There is no landmass between Vik and Antarctica, and hence the Atlantic Ocean can attack with mighty force. 72hrJetsetterGirl was blown along the black sandy beach. Whilst wandering around Vik (mind you, there is only a gas station, a restaurant and a wool shop), 72hrJetsetterGirl noticed trolls appearing at every opportunity. Just off the shore of Vik, the basalt rock formation Reynisdrangar sticks out of the Atlantic Ocean like fingers. According to folklore, these spindly rock formations are actually trolls frozen in time: the trolls were trying to drag three ships ashore when they were caught in the sunlight and turned to stone! Before conquering the glacier, 72hrJetsetterGirl needed to be dressed for the occasion, swapping her high heels for crampons (metal spikes attached to the soles of hiking shoes, so as not to slip on the glacier ice), ice axe at the ready. The adventure was about to begin. Crampons on – ready for an ice adventure! Under the guidance of a certified glacier guide, 72hrJetsetterGirl was ready for her glacier adventure on the Sólheimajökull glacier. This glacier is the tongue that extends from the great Mýrdalsjökull glacier. 
During the fun and moderate-level walk, the weather conditions changed dramatically on the glacier – one moment the sun was shining and 72hrJetsetterGirl wanted to rip off her Eskimo outfit as she was getting quite warm, and then five minutes later she was getting pelted by hail. The glacier guide in action on Sólheimajökull glacier. During the walk, 72hrJetsetterGirl explored the wonderland of ice sculptures, water cauldrons and deep crevasses on the breathtaking Sólheimajökull glacier. The guide provided information about the behavior of glaciers and their importance to nature. This was a unique experience for 72hrJetsetterGirl and yes, experiences are the new possessions. OK, just like everyone else 72hrJetsetterGirl had read reviews about the Blue Lagoon, some saying it is a must whilst others say it is overrated. As 72hrJetsetterGirl had not experienced a hot thermal spring pool, this was a new experience. Certainly a lovely way for 72hrJetsetterGirl to end her Icelandic adventure. When making the initial airline booking, check that you have entered your personal details correctly. Most airlines will waive the change fee if they are notified immediately of the error. Moral of the story: check personal details, especially names, once you have received the confirmation advice. The name “Golden Circle” is just a marketing term and has no connection to Icelandic history – who would have thought! Go those marketers. Good $$ deal – the independent package included flights, accommodation, a city tour and a Northern Lights boat cruise. 3 days / 4 nights (departed the US Wednesday evening, arrived early Thursday morning into Reykjavik, out Saturday pm). Express bus service from the airport to the hotel. The buses are literally at the ready, lined up waiting for passengers (also equipped with Wi-Fi) – definitely no hanging around 30+ minutes for a shuttle service. 
72hrJetsetterGirl’s accommodation was located outside of the City Centre and “tokens” were provided to use the local bus services. The bus service was fairly regular. As 72hrJetsetterGirl was traveling solo, day tours were the preferred option for sightseeing; hotel pickup was included and, given that the weather can vary at a moment’s notice, this was the best option for 72hrJetsetterGirl. On the final day, 72hrJetsetterGirl opted for the Blue Lagoon activity: collected from the hotel in the morning, taken to the Blue Lagoon and then transported to the airport for the flight back to the US. Storage of luggage at the Blue Lagoon does incur a fee depending on the size of the suitcase. 72hrJetsetterGirl found that the weather conditions in Iceland can change very quickly. One moment the sun is shining and it is quite warm, then all of a sudden you are being pelted with small hail and gusty winds. Google the best time to go to Iceland depending on what you are wanting to experience (Northern Lights, glacier walks, snowmobiling etc.). We all know that Mother Nature does not always guarantee the perfect conditions for the chosen experience. Food is quite expensive in Iceland, and eating out even at the cheaper options is costly compared to American prices. Depending on the time of year, and if on a driving adventure, you may not see another car for miles and miles once outside of the Reykjavik metro area. The same goes for services (gas/food etc.). The Icelandic tourism industry is one of the slickest around and caters very much for the tourist. It made traveling a total breeze. Everyone speaks “impeccable” English. Iceland should be on everyone’s travel list – whether you are like 72hrJetsetterGirl, who opted for organized day tours, or want to create your own adventure in this stunning country. For photographers – this is a photographic paradise. 
Due to the increased number of travelers to this peaceful country, remember the basics of responsible travel – don’t litter, reduce your footprint and leave the place better than you found it, so that many more travelers can enjoy its beauty. The scenery is picturesque and better than words can describe! Experiences are the new possessions. Two words: VISIT ICELAND!! Who else has had a travel booking mishap? Did it work out in the end?
2019-04-22T04:12:02Z
http://www.72hrjetsettergirl.com/category/europe/
The pagans run after all these things! Our gracious heavenly Father bids us to cast all our cares upon Him—assuring us that He cares for us! His EYE is ever upon us! His eye is a Father's eye, which is always quick, and always affects His heart. He has set His eyes upon us for good. His eye is ever over us—fixed immediately upon us! His EAR catches . . . It is always open to our cry. He listens to us—as one most tenderly and deeply interested in us. He knows our every need—and He intends to supply us! Our heavenly Father has forever determined—that none of His children shall lack any good thing—and that He will never withhold any good thing from them. Is the Christian guided aright through this wilderness world? It is by the wisdom of Christ! He has no wisdom of his own—and he is surrounded by snares and foes! He has within him a principle of evil, which invariably prompts him to leave the right road. He is prone to miss the mark, like a broken bow. He is attracted and affected by external worldly objects, which feed the lust of the flesh, the lust of the eye, and the pride of life; and but for divine wisdom guiding him—he would stray into the fatal paths of folly and crime! To guide him aright, requires an omniscient eye, a wise intellect, and loving heart; and Jesus possesses and exercises these for the good of His people. Is the Christian protected from the innumerable dangers and foes to which he is exposed? It is by the power of Christ! That power is his guard, and his defense. An almighty arm is placed beneath him—to uphold him. An almighty arm is lifted up—to defend him. He looks to it when foes assail him; he leans on it when his own strength fails him; and he trusts in it, in every hour of danger. Without the power of Jesus—he never could persevere; with it—he can never apostatize. It keeps him as a garrison keeps a town, as a shepherd keeps his flock, as a parent does his child. Is the Christian supplied? Are his needs anticipated and met? 
It is by the providence of Christ! Jesus rules over all worlds! He directs and controls all events! He keeps His eye and His heart upon His people! He is engaged to provide for them—and He sacredly keeps His engagement. ready to supply our needs. With Jesus for our provider—we are strengthened, supplied, and supported. O Jesus! what would we be without You? Exposed to the just wrath of Almighty God! thoughts are occupied with Him. Forget whom I may—I never forget Him. vigorous meditations on Jesus. I dwell at times on . . . until I am enamored with His beauty, and enraptured with His love! meditate on Him through the watches of the night. Jesus is the solace and joy of my soul. He fills me full of joy with His countenance. relieves, restores, and makes me happy. He is the river of pleasure—in which I sometimes bathe! He is the Eden of delights—in which I sometimes walk! Take away Jesus—and my soul droops, desponds, and dies! Give me Jesus—and the enjoyment of His presence, and I can do without any other heaven! He is the joy of my brightest days, and my solace in my dreariest nights! guide me in your truth and teach me." SIN is our daily burden. HOLINESS is our constant pursuit. The FEAR of God is placed as sentinel of the soul, to watch the approach of the enemy. GODLY SORROW is appointed the messenger to carry confessions, petitions, and desires to the throne of grace. ZEAL is armed with a sword to cut off the sinful right hand, or pluck out the sinning right eye. HOPE is placed on the watchtower to look out for the coming of the Lord—when sin shall expire in His presence, and holiness be perfected in the rays of His glory. FAITH is engaged to work for God and man, having . . . and love for its handmaid. PATIENCE is appointed to keep all quiet and calm within—let the burden ever so heavy, and the trial ever so severe. Patience will call submission and resignation into active employment, if fretfulness, murmuring, or dissatisfaction should attempt to stir. 
PEACE is placed as a garrison, to keep the heart and mind from anxiety, foreboding, and fault-finding with the Lord's dealings. JOY is directed to run backwards and forwards to the wells of salvation, to supply the soul with the reviving, invigorating, and strengthening waters of life! Thus evil is prevented, good is secured, God is glorified, Satan is foiled, and the soul is saved! It is not sufficiently realized, that the Bible has far, very far, more to say about this present life—than it has about the future one; that it makes known the secrets of temporal felicity—as well as everlasting bliss. In their zeal to tell men how to escape from hell and make sure of heaven—many evangelical preachers have had all too little to say upon our conduct on earth; and consequently, many who entertain no doubts whatever that they will inhabit a mansion in the Father's house—are not nearly so much concerned about their present walk and warfare as they should be; and even though they reach their desired haven, such slackness results in great loss to them now! The teaching of Holy Writ is the very reverse of the plan followed by many an "orthodox pulpit"! It not only gives much prominence to it—but, in Old and New Testament alike, its main emphasis is on our life in this world—giving instruction how we are to conduct ourselves here and now! The central thing which we wish to make clear in this article, and to impress upon the reader—is that God has established an inseparable connection between holiness—and happiness; between our pleasing of Him—and our enjoyment of His richest blessing; that since we are always the losers by sinning—so we are always the gainers by walking in the paths of righteousness; and that there will be an exact ratio between the measure in which we walk therein—and our enjoyment of "the peaceable fruit of righteousness" (Hebrews 12:11).
However distasteful to the flesh, whatever sneers it may produce from carnal professors, the Christian must rigidly and perpetually act by the rule that God has given him to walk by. In so doing, he will be immeasurably the gainer; for the path of obedience—is the path of prosperity! It leaves earth for Heaven—or for Hell! God commands you to consider—and you cannot neglect to do so, but you sin. Your circumstances require it of you—and you cannot neglect to do so, but you must be losers thereby. God complains, "My people do not consider!" Inconsideration has ruined thousands—and will ruin thousands more. But shall it ruin you? It will—if you give way to it. Let me entreat you to do so no longer. Consider that you are immortal—and must live forever. Your BODY will die—and perhaps soon. But not so your SOUL—it never dies. Death changes its place—but not its nature. It leaves earth for Heaven—or for Hell! It lives as much when the body is dead—as it did before. It is conscious—and capable of enjoying the highest pleasures—or of enduring the most dreadful torments! And one or the other will be its lot. Where shall I be after death? Among whom shall I have my eternal portion? Is it rational to confine our attention to the present time, and the present world—when time bears no comparison to eternity; and our stay in this present world must be brief? Consider that you are a sinner. You have broken God's law. You have incurred God's displeasure. You are condemned by God's Word. Your heart is alienated from God. You act in opposition to God. You lie absolutely at God's mercy—and at any moment He could cut you down, and send you to Hell. You have no right to expect anything but justice at His hands! And if He dealt with you after your sins, and rewarded you according to your iniquities—your doom would be indescribably dreadful! Consider that you are immortal—and that you must live somewhere forever! Consider that you are a sinner—and that you cannot live in Heaven as such! 
Consider that you may be saved—for the Lord Jesus Christ is both able and willing to save sinners! Consider that you can only be saved by sincere faith in Christ! Consider that you must renounce your own righteousness, and rely solely on His finished work! Consider that if you are saved by Christ—you will live to Christ. He will be your Lord—as well as your Savior. He will be your example—as well as your atoning sacrifice! Consider that faith is the root of holiness, and a holy life alone proves our faith to be genuine. "This is what the Lord Almighty says: Consider your ways!" If a man had to wade breast deep through a thousand hells! Jesus is God's indescribable gift! Heaven itself is nothing, as compared with Him! If a man had to wade breast deep through a thousand hells to obtain Christ—it would be well worth the venture, if at the last he might but say, "My Beloved is mine—and I am His!" Jesus is so precious—that He cannot be matched! There is none like Him. The most lovely of the lovely—are vile and deformed, when compared with Him. As Rutherford would say, "Black sun, black moon, black stars—but, O bright, infinitely bright Lord Jesus!" If you ransacked time and space—eternity and immensity—you could find none that could even be compared unto Him—He is so precious! He is all that your souls can desire; yes, He Himself is all. You could not buy Christ in any market—if you gave the price of heaven and earth for Him. "How will you escape being condemned to hell!" punishment of His eternal wrath for his sins. and intensity are only fully understood in hell. There are two sides to a Christian's life: a light side—and a dark one; an elevating side—and a depressing one. His experience is neither all joy—nor all grief; but a mingling of both. It was so with the apostle Paul: "As sorrowful—yet always rejoicing" (2 Corinthians 6:10). When a person is regenerated, he is not immediately taken to heaven.
Nor is sin then eradicated from his being, though its dominion over him is broken. It is indwelling corruption which casts its dark shadow over his joy! The varied experiences of the believer, are occasioned by Christ's presence—and sin's presence. If, on the one hand, it is blessedly true that Christ is with him all his days, even unto the end; on the other hand, it is solemnly true that sin indwells him all his days, even unto the end of his earthly history! Said Paul, "evil is present with me"; and that, not only occasionally—but sin "dwells in me" (Romans 7:20-21). Thus, as God's people feed upon the Lamb, it is "with bitter herbs that they eat it" (Exodus 12:8). —are among the unmistakable evidences that he is a regenerate person. For it is certain—that no one who is dead in trespasses and sins, realizes that there is a sea of iniquity within his heart, defiling his very thoughts and imagination; still less does he make conscience of the same and lament it! It is cause for fervent praise—if your eyes have been opened to see "the sinfulness of sin," and your heart to feel its obnoxiousness. Since it was not always thus, a great change has taken place—you have been made the subject of a miracle of grace! But the continuance of indwelling sin presents a sore and perplexing problem to the Christian. He is fully assured that nothing is too hard for the Lord. Why then, is evil allowed to remain present with him? Why is he not rid of this hideous thing—which he so much loathes and hates? Why should this horrible depravity be allowed to disturb his peace and mar his joy? Why does the God of all grace not rid him of this harassing tyrant? Let it be a settled principle again in our religion, that when a man's general conversation is ungodly—his heart is graceless and unconverted. Let us not give way to the foolish notion, that no one can know anything of the state of another's heart, and that although men are living wickedly—they have good hearts at the bottom. 
Such notions are flatly contradictory to our Lord's teaching. Is the general tone of a man's speech carnal, worldly, godless or profane? Then let us understand, that this is the state of his heart! When a man's tongue is extensively wrong, it is absurd, no less than unscriptural, to say that his heart is right! Let us notice Mary's POSITION. She was sitting at the feet of Jesus. Most probably He was reclining on the couch, and she went and took her place behind Him, where she could hear what He said, and occasionally get a glimpse of His face. It is the posture of HUMILITY—she took the lowest place. She had no wish to be seen, nor did she regard her own ease—she was intent on getting good from Jesus. It was the posture of ATTENTION—she wished to catch every word, and to understand all that the Lord was saying. If Jesus is teaching—then Mary will attend and listen. It was the posture of a LEARNER—she was a disciple of Jesus, therefore she sat down at His feet, that she may receive of His words. He need not now say unto her, "Learn of Me," for she was most anxious to do so. It was the posture of SATISFACTION—if she could but be within the sound of His voice, within the sight of His eye—it was enough for Mary. Anywhere with Jesus—would satisfy her! It was also the posture of REPOSE—here at the feet of Jesus, she found rest unto her soul. Her desires were satisfied, her love was gratified, her hungry soul was fed. It was enough. Being at the feet of Jesus was to her—a kind of earthly heaven. Mary sat at the feet of Jesus in a humble cottage. She now sits by His side in the heavenly mansion! Are you humble enough to take a seat at the feet of Jesus? Is it your delight to listen to His words? Are you like a little child desiring to learn of Him, and be taught by Him? Are you satisfied—if you can but get near to Jesus? Do you find sweet and refreshing repose in His presence? 
This promise which ensures us suitable and sufficient strength for all future days—is made by One who loves us dearly. Loves us—but who shall describe, who can suitably represent His love! It is Infinite love—and cannot be comprehended! It is Eternal love—and cannot be measured! It is Unchangeable love—and cannot be diverted from its objects! It is Sovereign love—and was fixed on them without anything in them to attract or draw it toward them! stronger than a husband's love. This Divine love is . . . a sun that will never set! But how many mistake wishes for needs! And while the Lord has promised to supply all His people's needs—He has nowhere promised to gratify all their wishes. "They shall not lack any good thing." That is, they shall not lack whatever is really good for them at the time—and under the circumstances. Whatever will promote their holiness and happiness—shall certainly be conveyed to them. First, the Lord is ABLE to supply them. "The earth is the Lord's, and the fullness thereof." He is able to do exceeding abundantly above all that we can ask or think! Second, the Lord DESIRES to supply them. "Like as a father pities his children, so the Lord pities those who fear Him. He knows our frame—He remembers that we are dust." Third, the Lord has PROMISED to supply them. "The Lord God is a sun and shield, the Lord will give grace and glory; no good thing will He withhold from those who walk uprightly." "My God shall supply all your needs, according to His riches in glory by Christ Jesus." Fourth, the Lord ALWAYS HAS supplied them. Look at Jacob, at David—and at all who have already arrived in glory. If the question put by the Lord Jesus to His disciples, when they returned from the missionary tour on which He sent them without purse or bag—"Have you lacked anything?"—were now put to them, every one of them would readily answer, "Nothing, Lord!"
Aged believer, you and I can look back—and wonder how it is that we are where we are, and what we are: how we have held on, and held out until now. Here is the whole secret of the case—"But the Lord was my support!" —then the Lord was my support! and long ago I must have perished in my afflictions—or been a prey to my foes—but the Lord was my support! —but the Lord was my support! Scripture speaks of "the multitude of His loving kindnesses!" (Isaiah 63:7) And who is capable of numbering them? Said the Psalmist, "How excellent is your loving-kindness, O God!" (Psalm 36:7) No pen of man, no tongue of angel, can adequately express it. We read of God's "marvelous loving-kindness!" (Psalm 17:7) And surely it truly is! David prayed, "Display the wonders of Your loving-kindness!" Wondrous it truly is—that One so infinitely above us, so inconceivably glorious, so ineffably holy, should not only deign to notice such worms of the earth—but set His heart upon them, give His Son for them, send His Spirit to indwell them, and so bear with all their imperfections and waywardness as never to remove His loving-kindness from them! Swallowed up in a worldly church! Such is the testimony of the Lord Jesus. Real Christians have never been favorites of the world—and while it continues what it is, they never can be. This 'sect' originated with Jesus, the hated Nazarene, who came into the world for its good, and to save His people from their sins. He gathered around Him many—but they were principally the poor and unlearned. There was nothing in them, or about them, to recommend them to the proud and sensual world. They were begotten of God, and made new creatures in Christ. They embraced the truth that He taught. They observed the precepts that He gave. They copied the example that He set. that the Lord Jesus came into the world to take the sinner's place, fulfill the law in the sinner's stead, and die as the sinner's substitute. at such objects, they aimed. 
And yet, they were spoken against and despised, because they poured contempt on the luxuries, pride, and honors of this world. They were treated as the offscouring of all things, unfit for society, unfit to live. And yet, like Israel in Egypt, the more they were persecuted, the more they multiplied and grew; until at length they spread not only over the Roman empire—but nearly over the world. And, had they retained . . . they would no doubt have encircled the globe! But at length they were . . . and then their glory departed! so fell from their exalted station, and lost their real dignity. The 'sect' that had been spoken against everywhere, with the exception of a few—was swallowed up in a worldly church! There are still some, who, like the ancient sect of the Nazarenes, are spoken against everywhere. They will not swim with the stream. They will not compromise their Master's honor, or give up their Master's truth. According to the light they have—they walk; and they rejoice to exalt the Savior, humble the sinner, and proclaim salvation, all of grace. They rejoice that they are counted worthy to suffer shame, for His dear name. Reader! Do you belong to this sect? Is there anything in your religion that is distasteful to the world, anything that draws forth its opposition, or excites its contempt? The carnal mind is still enmity against God, and if we are godlike—that enmity will manifest itself against us! If we copy Christ's example, as set before us in the gospel; if we testify against the world, that its works are evil, and call upon it to repent, as Christ did—we shall soon be hated by the world! There is a way to hell—even from the very gates of heaven! Many are in this dangerous position. They are not far from the kingdom of God—but not actually in it. They have clear light in their heads—but have no grace in their hearts. They know the gospel in theory—but have no inward experience of its power. 
But no man can be saved by light in the mind; there must be the life of God within the soul. They have not only clear light—but correct morals. The tongue is controlled. The temper is governed. The life is regulated. But with all this, the soul is dead in trespasses and sins. There may be morality—without spirituality. The life may not only be correct—but there may be a regular attendance on gospel ordinances. They may come as God's people, sit as God's people, hear and sing as God's people—and yet not be in the kingdom of God! There may be no objection felt, or opposition shown to the doctrines or duties of the gospel. All may be admitted, professed, and even admired; but still the person may not be in the kingdom of God. There may also be a form of prayer—but prayer without faith—prayer without the heart, without the soul. They may be employed in teaching God's Word, either in the Sunday School, or in the pulpit—and yet not be in the kingdom of God. O how solemn is the thought, how searching is the fact—that people . . . may employ their time and talents in instructing others in the things of God—and yet never enter into the kingdom of God themselves! Many will come very near to the kingdom—but will never enter it. As John Bunyan says, "There is a way to hell—even from the very gates of heaven!" But it must be dreadful to come near, so near to heaven—and yet to be thrust down to hell! 1. The Lord's people are often found in the most unlikely places! Who would have expected to find God's chosen people—a multitude of them—in a place so foul, so polluted, so degraded—as Corinth? 2. The Lord chooses the most unlikely people! Who would ever have thought that the Lord would have chosen: the sexually immoral, idolaters, adulterers, male prostitutes, homosexuals, thieves, the greedy, drunkards, slanderers, swindlers—to be saved? But He did! This implies exposure to foes—Satan, evil men, and death—against these we need defense. 
It implies opposition—and the opposition of our foes is great, daring, and deadly. It implies danger to be apprehended—because we are weak, timid, and unskillful—and our foes are strong, daring, and experienced. Our safety therefore, stands in what the Lord is to us—He is our shield, and such a shield as no one besides has, or can have. He is omniscient to see all our foes and dangers. He is omnipresent to help us at all times, and against all opposers. He is omnipotent to defend us, and secure us from all evil. He is faithful to fulfill His word, and carry out His engagements. He will come between the believer and danger. He will preserve the trusting soul from all real injury. He will protect the upright in heart everywhere, and at all times. What a mercy! What an unspeakable privilege is this! "You who fear the Lord—trust in the Lord! He is their help and shield." Psalm 115:11. O for grace to trust the Lord with all, to trust the Lord for all, and to trust the Lord under all! Holy Spirit, strip us of all confidence in the flesh, of all reliance on man, and of all trust in circumstances; and bring us by Your divine and holy teaching—to trust in the Lord alone! No lost sinner, while carnal, while minding the things of the flesh—can ever please God. He cannot . . . at any season—either in life or death. Man is totally depraved. He is wholly fallen. The whole head is sick, the whole heart is faint. The leprosy cleaves to him, has spread over him, and dried up all the moral and vital moisture of the soul. The man is therefore lost, wholly lost, eternally lost—unless God interposes for his rescue! All that he does while he is so—is displeasing to God. He has no faith, and "without faith it is impossible to please God." In all his prayers, tears, alms-deeds, and other good works—there is something that is displeasing to God. It is like the offering of Cain; for the person must be reconciled to God—before the sacrifice can be accepted by God. 
Until then he cannot please God, for he cannot set his heart to do it. He may try—but the innate disposition of the heart while carnal, will be too strong for him, and will lead him to break through all his vows, promises, and resolutions. He may change his conduct—but he cannot change his heart; for its depravity has become natural to it. He cannot do what God requires—as God requires it. If what he does is externally good, it is internally bad. The motive prompting, and the end aimed at—are alike evil, for SELF is always the carnal man's god. This is Paul's criterion. No matter what a man has, if he does not have the Spirit of Christ—"he does not belong to Christ!" This divine agent, as the Spirit of Christ—always convinces the soul of its need of Christ. He always leads to the cross of Christ! He will not allow the soul to stop at, or rest in, sacraments, ceremonies, or any duties it may perform. He points to the cross. He leads to the cross. He fixes the sinner's eye upon the cross. He brings peace to the soul through the cross. He dedicates and devotes the sinner to God's service at the cross. Every one who has the Spirit of Christ—knows something of the worth, virtue, and efficacy of the cross of Christ. The Spirit of Christ—always conforms to the image of Christ. Christ is the model after which the Spirit works; and by the Word and ordinances, by providence and His own inward operations—He stamps the likeness of Christ upon the soul. He fixes the eye on Jesus, who, as a mirror, represents and sets forth the glory of God; and by looking at Jesus—a divine transformation takes place—and we are changed into the same image, from glory to glory, even as by the Spirit of the Lord. Unless, therefore, we have been taught our need of Christ as a Savior; unless we have been led to the cross of Christ to seek salvation there; unless we are in some degree conformed to Christ, and are daily seeking more conformity—we have not the Spirit of Christ. 
"And if anyone does not have the Spirit of Christ—he does not belong to Christ." The Spirit of Christ—is the great proof that we are Christ's. There may be much feeling, a moral reformation, and a profession of religion—without this. But if we have the Spirit of Christ, our thoughts will be engaged with Him, our hearts will be going out to Him, and we shall at times long to depart, that we may be with Him, and see Him as He is! The Spirit of Christ always renders Christ precious—and produces the highest possible esteem of Him. The Spirit of Christ always makes its possessor like Christ. Not perfectly, here on earth—yet He kindles and keeps alive a desire for perfect likeness. This is the great, the grand, the habitual aim of the soul, always and everywhere—to be like Christ! temper, and manifested the same morose spirit. Very few are well satisfied with the Lord's plans. Fewer still are always pleased with the Lord's works. How many quarrel with His sovereignty! What hard things have been spoken against it! unwise, unkind, and almost unjust! "Have you any right to be angry?" Paul compares present sufferings—with future glory. Believers are exposed to all kinds of suffering, and instead of obtaining an exemption from afflictions—they are assured that it is through much tribulation that they must enter into the kingdom of God. Some endure inward suffering, with which no one is fully acquainted but God Himself. They have such darkness, gloom, distress, agitation, trouble, and sorrow—as would not be easy to describe. Some suffer much in body, from the stressed and disordered state of the nervous system, from chronic diseases, or deformities in the physical frame. They seldom move without suffering, and for years together have but little freedom from weakness and pain. They live a life of suffering, a kind of dying life—and think much of heaven as of a place where there is no more pain. 
Some suffer much financially; scarcely anything seems to prosper with them; losses, crosses, and opposition meet them at every turn; and though they live honestly, and conduct their business honorably—they are thwarted, hindered, and filled with perplexity. No one can tell what they suffer from financial trials and difficulties. Others suffer from reproach, misrepresentation, strife, and persecution in the world, or in the Church—or both! No one seems to understand them, or is prepared to sympathize with them; they are like "a sparrow alone upon the house-top." False friends and open enemies unite to trouble and distress them, so that they often sigh, and say, "O that I had wings like a dove, for then would I fly away and be at rest!" Others in the domestic circle, or from some of the relationships of life—are called to suffer long and seriously. But whether from trouble of mind, sickness of body, trials in business, family difficulties, or persecution for Christ's sake—all suffer, and most believers suffer much! Glory which will exclude all pain and suffering, all sin and sorrow! Glory beyond the reach of all foes and the cause of all trouble! Glory which includes happiness—perfect, perpetual, never-ending happiness! Glory which includes honor—the highest, holiest, and most satisfying honor! Glory, or splendor—which will fill the soul, clothe the body, and dignify the entire person forever! Filled with light, peace, and joy; clothed with beauty, brightness, and magnificence—they will appear with Christ in glory—filling them with wonder and unutterable delight! No more disease, no more weakness, no more pain! We shall soon have 'glorious liberty'! free from every burden that bows it down. The BODY will be gloriously free! It will be a glorious body—like the body of our Lord and Savior Jesus Christ. But health, strength, and ease will characterize it forever! free from all internal, external, and eternal evil. It will be freedom crowned with glory—with . . . 
Eye has never seen, ear has never heard, nor has the heart of man ever conceived of anything so grand, so magnificent, so glorious—as what God has provided, and has in store for His people! Salvation includes . . . and our glorification—which is future. while separation from the world and dedication to God—prove that we are saved. Not in the same sense as we are saved by faith—which delivers us from guilt, degradation, and eternal death—by receiving from Christ, and confiding in Christ. To be saved by hope—is to be kept, preserved, upheld, or sustained, in the midst of foes, dangers, and trials. Hope quickens us in duties—and preserves us from becoming cold and dead. It comforts us in tribulations—and keeps us from being disheartened and gloomy. It enables us to overcome temptation—and so to hold on our way, looking unto Jesus. It gives us peace in death—in the sure prospect of victory over the grave. by protecting us against apostasy—into which we can never fall so long as we hope in God. From many evils, at many times, in many ways—we are saved by hope! Hope is in God—as its highest object and best end. Hope is through Christ—who is the way to the Father, the truth, and the life. Hope is on the ground of the Word, which warrants, excites, and regulates it. Hope is for all that God has promised, whether temporal or spiritual, in this world or the next. Hope should be encouraged—as it brings . . . honor to our Lord Jesus Christ. Holy Spirit, fill us with a lively hope, and teach us to expect . . . all that You have revealed in Your most holy Word. We owe everything to grace! children of grace—and consequently heirs of heaven! We owe everything to grace—free grace, sovereign grace! Our heavenly Father requires us . . . to submit to Him without murmuring or complaining. I myself will help you! Wherever the Lord leads us—He will support us; nor shall the difficulties of the way, or the weakness we feel—be too much for us. 
His hand is stretched out to us, and it is for faith to lay hold of it and proceed, confident of divine assistance. His omnipotent arm is the protection of His people in danger—and the strength of His people in weakness. He is "an ever-present help in times of trouble". He is a God at hand. Are you weak, or in difficulty? Plead His Word; it is plain, positive, and sure. He cannot lie. He will not deceive. His strength is made perfect, and is glorified in your weakness. Fear not, underneath you are His everlasting arms! He CAN help—for He is omnipotent. He WILL help—for He has given you His Word. "Trust in the Lord at all times; yes, trust in the Lord forever, for in the Lord Jehovah is everlasting strength!" That strength is promised to you, and will be employed for you in answer to prayer. Why then are you so fearful? Why cast down? He says, "I myself will help you!" —he chose an extraordinary subject. he was well versed in tradition. There were . . . few congregations that he could not interest. But he made the conversion of sinners the object of his life—and he chose Christ crucified to be the subject of his ministry! No matter where he went—he took his subject with him. No matter whom he addressed—he directed their attention to this point. Paul's subject then, was Christ Crucified! Paul CHOSE this subject—and he had good reasons for doing so! for it is the center where . . . It is the theater where God . . . It is the instrument by which . . . It is an object which . . . furnishes matter for endless praise! Second, it is the most honored subject. It tunes the harps of heaven. It fills the sweetest songs on earth. It is that by which the Holy Spirit works . . . in the establishment of the church of God. By the preaching of Christ crucified . . . the temples of the heathen were transformed into houses of prayer. By the preaching of the cross . . . millions are snatched from Hell! Third, it is a subject that is intensely hated! 
Devils hate it, and try to prevent its publication. Erroneous men hate it, and try to substitute something of their own for it. And just in proportion as men are influenced by the prince of darkness, or yield to the pride of their own fallen natures—will they hate the doctrine of the cross! all poor perishing sinners need it! The more we know of God's nature and government—the more we see of man's natural state and condition. And the more we feel of our own weakness and depravity—the more shall we prize and value the doctrine of the cross! Christ, and Him crucified shall be . . . the foundation of my everlasting hope! O my soul, look to Jesus—as crucified for your sins! Think of Jesus—as dying in your stead! Speak of Jesus—as full of grace and love! Christians! WHAT do we preach? We are ALL preachers—and we preach daily! But do we preach Christ? Do we speak of Him with our tongues? Do we write of Him with our pens? Do we honor Him with our lives? Is Christ and His glory—the grand end and aim of our life? Is it out of love to Him? Is it that we may do good to souls? Is it that we may please God? Christ crucified should be preached by every Christian. Christ crucified should be preached in all companies. Christ crucified should be preached every day. and we must daily meditate on Christ crucified! May Christ and His cross be all my theme! May Christ and His cross be all my hope! May Christ and His cross be all my joy! Cross of Jesus! Jesus crucified! To you would I look in life—and all its troubles! To you would I look in death—and all its pangs! To you would I look in glory—when filled with all its joys! "God forbid, that I should glory, except in the cross of my Lord Jesus Christ!"
Over 300 million people in the United States make decisions about travel every day, and about three-quarters of the vehicle miles traveled (VMT) on the Nation’s roadways are for purposes of personal travel. The household travel data cited below are drawn primarily from a sampling of Americans’ daily travel habits collected in the National Household Travel Survey (NHTS). Travel to and from work accounted for 26.7 percent of household-based vehicle travel in 2009, compared with 33.7 percent in 1969; the share of trips devoted to personal visits and recreation also declined. The share of trips attributed to shopping and errands grew significantly over this period, from 17.7 percent to 30.7 percent. These trips had widely different destinations from work trips and occurred at different times of day. Recent data on work commute trends show an increase in telecommuting and flexible hours in the U.S. workplace. More than 36 percent of full-time workers can set or change their start time. The data show that workers are increasingly linking commuting with trips for non-work activities such as errands and shopping. These non-work trips have the potential to conflict with work commute trips and to extend the a.m., p.m., and midday peak travel periods. Weekend travel for errands and recreation is also increasing. While congestion used to be associated only with peak travel hours, the increasing share of trips unrelated to work presents a challenge for the operational performance of the transportation system at other times as well. Travel to work has historically defined peak-hour travel demand and, in turn, influenced the design of transportation infrastructure. Work trips are a critical factor in transit planning and help to determine the corridors served and assess the level of transit services available. 
The average automobile commuter spends 22.8 minutes commuting a one-way distance of 12.6 miles; bus commuters travel a shorter average distance of 9.4 miles, but have a higher average commuting time of 48.9 minutes. Socio-demographic changes in the United States are expected to impact travel patterns in coming years. First, while older drivers tend to reduce their daily travel relative to when they were younger, these older drivers are expected to constitute a significantly higher share of total national travel in the future as the baby boom generation ages. Second, 18 million of 150 million U.S. households are made up of new immigrants who tend to have a larger number of persons per household, a greater number of daily household trips, and less likelihood of owning a vehicle; increased immigration can have implications such as increased carpooling, walking, biking, and use of public transit. Third, population redistribution within the United States, such as shifts from the Northeast and Midwest to the Southern and Western States, has the potential to strain the transportation systems in the areas gaining population. In 2008, a network of 4.1 million miles of public roads provided mobility for the American people. Rural areas accounted for 73.4 percent of this mileage. While urban mileage constitutes only 26.6 percent of total mileage, these roads carried 60.1 percent of the almost 3.0 trillion vehicle miles traveled (VMT) in the United States in 2008. Urban areas are defined to include all places with a population of 5,000 or greater; all other locations are classified as rural. In 2009, 25.9 percent of the Nation’s 603,310 bridges were located in urban areas; these bridges carried 76.3 percent of total bridge traffic and included 55.9 percent of the total bridge deck area. Roadways functionally classified as rural local made up 50.2 percent of total mileage in 2008, but carried only 4.4 percent of total VMT. 
In contrast, the urban portion of the Interstate System made up only 0.4 percent of total mileage but carried 15.2 percent of total VMT. Highway mileage increased at an average annual rate of 0.3 percent between 2000 and 2008, while VMT grew at an average annual rate of 1.0 percent. In 2008, 77.4 percent of highway miles were locally owned, 19.3 percent were owned by States, and 3.2 percent were owned by the Federal government. Bridge ownership is more evenly split; in 2009, 50.2 percent of bridges were locally owned, while 48.1 percent were owned by States. The term “Federal-aid highways” applies to the subset of the road network that is generally eligible for Federal funding assistance under most programs; this includes all functional systems except for rural minor collector, rural local, and urban local. (Certain programs have broader eligibility criteria that allow funds to be used for any type of road). Federal-aid highways represent 24.5 percent of total mileage and carry 84.7 percent of total VMT. The 162,944-mile National Highway System (NHS) includes the Nation’s key corridors and carries much of its traffic. In 2008, NHS included only 4.0 percent of the Nation’s total route mileage and only 6.7 percent of the Nation’s total lane miles, but 44.3 percent of VMT in the Nation were on the NHS. Of the total bridges in the Nation, only 19.5 percent are on the NHS; but these bridges comprise 49.2 percent of the total bridge deck area of the Nation. All of the Interstate System is part of the NHS, as are 83.5 percent of rural other principal arterials, 87.1 percent of urban other freeways and expressways, and 36.3 percent of urban other principal arterials. Transit system coverage, capacity, and use in the United States continued to increase between 2006 and 2008. In 2008, there were 690 agencies (667 public agencies) in urbanized areas required to submit data to the National Transit Database (NTD). All but 166 of these agencies operated more than one mode. 
There were also 1,396 rural transit operators that reported. Urban reporters operated 658 motor bus systems, 633 demand response systems, 16 heavy rail systems, 29 commuter rail systems, and 35 light rail systems. There were also 67 transit vanpool systems, 20 ferryboat systems, 7 trolleybus systems, 4 automated guideway systems, 4 inclined plane systems, and 1 cable car system. Not all transit providers are included in these counts since those that do not receive grant funds from the Federal Transit Administration (FTA) are not required to report to the NTD. These systems operated 73,512 motor buses, 29,833 vans, 11,367 heavy rail vehicles, 6,124 commuter rail cars, and 1,919 light rail cars. Transit providers operated 11,864 miles of track and served 3,078 stations. Light rail systems have been growing fastest since 2006, with track mileage up 5.1 percent and the number of stations served up 3.0 percent. Nonetheless, the Nation’s rail system mileage is still dominated (62 percent) by commuter rail. Trends in directional route miles follow growth in track mileage and allow for comparison with nonrail modes. In 2008, transit services provided 10.2 billion unlinked trips and 53.7 billion passenger miles traveled (PMT). Heavy rail and motor bus modes continue to be the largest segments of both measures. Commuter rail supports relatively more PMT due to its greater average trip length (23.4 miles compared with 3.9 for bus, 4.8 for heavy rail, and 4.4 for light rail). Light rail is the fastest-growing rail mode (with PMT growing at 5.7 percent per year between 2000 and 2008) but still provides only 3.9 percent of transit PMT in 2008. Vanpool growth during that period was 11.8 percent per year, substantially outpacing the 1.8 percent growth in motor bus passenger miles, but while motor buses provided 39.5 percent of all PMT, vanpools accounted for only 1.8 percent. 
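The trip and passenger-mile totals above are linked by a simple identity: average trip length equals PMT divided by unlinked trips. A quick sketch using only the 2008 figures cited in the text (the system-wide average it derives is not stated directly in the report):

```python
# Average trip length is PMT divided by unlinked trips. Applying the 2008
# system-wide totals cited above (10.2 billion trips, 53.7 billion PMT):
unlinked_trips_billions = 10.2
pmt_billions = 53.7

avg_trip_length = pmt_billions / unlinked_trips_billions  # miles per trip
print(f"System-wide average trip length: {avg_trip_length:.1f} miles")

# The same identity explains why commuter rail carries a larger share of
# PMT than of trips: each commuter rail trip averages 23.4 miles, versus
# 3.9 for bus, 4.8 for heavy rail, and 4.4 for light rail.
```

The system-wide average works out to roughly 5.3 miles per trip, sitting between the bus and commuter rail averages, as expected.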
Rural transit operators reported 136.6 million unlinked passenger trips on 486 million vehicle revenue miles. This included 61 Indian tribes who provided 417,000 unlinked passenger trips. Rural systems provide both traditional fixed-route and demand response services, with 1,150 demand response systems, 494 motor bus systems, and 16 vanpool systems. A total of 304 urbanized area agencies also reported providing rural service at the rate of 24 million unlinked passenger trips on 37 million vehicle revenue miles in 2008. Every state reported providing rural service. Poor pavement condition imposes economic costs on highway users in the form of increased wear and tear on vehicle suspensions and tires, delays associated with vehicles slowing to avoid potholes, and crashes resulting from unexpected changes in surface conditions. While transportation agencies consider many factors when assessing the overall condition of highways and bridges, surface roughness most directly affects the ride quality experienced by drivers. On the NHS, the percentage of VMT on pavements with good ride quality has risen sharply over time, from approximately 48 percent in 2000 to about 57 percent in 2008. (These calendar year values are identified as fiscal year 2001 and 2009 values in some other U.S. DOT publications.) The VMT on NHS pavements meeting the acceptable standard of ride quality increased from 91 percent in 2000 to 92 percent in 2008. Rural NHS routes tend to have better pavement conditions than urban NHS routes. In 2008, for example, about 97.5 percent of all VMT on rural pavements was traveled on routes with acceptable ride quality. By contrast, the portion of urban NHS VMT on acceptable pavements was 89.0 percent that same year. For Federal-aid highways as a whole, including the NHS and other arterials and collectors eligible for Federal funding, the VMT on pavements with good ride quality increased from 42.8 percent in 2000 to 46.4 percent in 2008. 
The VMT on pavements meeting the less stringent standard of acceptable ride quality declined slightly from 85.5 percent in 2000 to 85.4 percent in 2008. Two terms used to summarize bridge deficiencies are “structurally deficient” and “functionally obsolete.” Structural deficiencies are characterized by deteriorated conditions of significant bridge elements and potentially reduced load-carrying capacity. A “structurally deficient” designation does not imply that a bridge is unsafe, but such bridges typically require significant maintenance and repair to remain in service, and would eventually require major rehabilitation or replacement to address the underlying deficiency. A bridge is considered “functionally obsolete” when it does not meet current design standards (for criteria such as lane width), because the volume of traffic carried by the bridge exceeds the level anticipated when the bridge was constructed, because the relevant design standards have been revised, or both. Addressing functional deficiencies may require the widening or replacement of the structure. Rural bridges tend to have a higher percentage of structural deficiencies, while urban bridges have a higher incidence of functional obsolescence due to rising traffic volumes. The share of total bridges classified as deficient (meaning the share of bridges classified as either structurally deficient or functionally obsolete) fell from 30.1 percent in 2001 to 26.5 percent in 2009. The share of NHS bridges classified as deficient fell from 23.3 percent in 2001 to 21.9 percent in 2009; this reduction was split evenly between structurally deficient and functionally obsolete bridges. This edition of the C&P report discusses levels of investment needed to achieve a “state of good repair” benchmark. 
The Federal Transit Administration (FTA) uses a numerical condition rating scale ranging from 1 to 5 (detailed in Chapter 3) to describe the relative condition of transit assets as estimated by the Transit Economic Requirements Model (TERM). Assets are considered to be in a state of good repair when the physical condition of that asset is at or above a condition rating value of 2.5 (the mid-point of the marginal range). An entire transit system is in a state of good repair when all its assets are rated at or above the 2.5 threshold rating. This report estimates the cost of replacing all assets in the national inventory that are past their useful life (that is, below the 2.5 condition rating) to be a total of $78 billion. This is 12 percent of the estimated total asset value of $663.3 billion for the entire U.S. transit industry. The cost-weighted average condition rating over all bus types is near the bottom of the adequate range (3.18) where it has been without appreciable change for the past decade. Average age is up slightly in all categories (except vans) as is the percentage of vehicles that is below the state of good repair replacement threshold. This is in spite of the fact that new vehicles have entered the fleet faster than at any time in the past decade. The number of vehicles reported is up 17 percent over the last 2 years. This is particularly evident with articulated buses (extra-long buses with two connected passenger compartments), which have grown in number by 25 percent. The average age of the bus fleet is now 6.2 years. The cost-weighted average condition rating over all rail vehicles is near the middle of the adequate range (3.47) where it has been without appreciable change for the past decade. 
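The state-of-good-repair test described above amounts to a threshold rule over asset condition ratings. A minimal sketch of that rule follows; the asset names, conditions, and costs are illustrative, not actual TERM or National Transit Database figures:

```python
# Sketch of the state-of-good-repair test: an asset is past its useful
# life when its TERM condition rating (1-to-5 scale) falls below the 2.5
# threshold, and the replacement backlog is the summed replacement cost
# of those assets. Illustrative data only.
SGR_THRESHOLD = 2.5

assets = [
    {"name": "bus fleet A",     "condition": 3.2, "replacement_cost": 40.0},  # $M
    {"name": "rail cars B",     "condition": 2.1, "replacement_cost": 55.0},
    {"name": "track segment C", "condition": 1.8, "replacement_cost": 30.0},
]

backlog = sum(a["replacement_cost"] for a in assets if a["condition"] < SGR_THRESHOLD)
total_value = sum(a["replacement_cost"] for a in assets)
print(f"Replacement backlog: ${backlog:.1f}M ({backlog / total_value:.0%} of asset value)")
```

At the national level, the same computation is what produces the $78 billion backlog cited above, about 12 percent of the $663.3 billion total asset value.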
With average conditions and ages being quite stable over the last 5 years, the most significant aspect of the rail vehicle data presented here is the recent growth in the size of the fleet, which increased by 16 percent, both in total and for each of the individual modes, between 2006 and 2008. This is the largest increase observed over the past decade by far. Non-vehicle transit rail assets represent the biggest challenge to achieving a state of good repair. The replacement value of guideway elements (track, ties, switches, ballast, tunnels, and elevated structures) is $143.6 billion, of which $19.1 billion is in poor condition (13 percent) and $15.8 billion is in marginal condition. The replacement value of train systems (power, communication, and train control equipment) is $92.0 billion, of which $13.7 billion is in poor condition (15 percent) and $18.9 billion is in marginal condition. The relatively large proportion of guideway and systems assets that are in poor condition, and the magnitude of the $38.2 billion investment required to replace them, represents a major challenge to the rail transit industry. Drivers continue to experience high levels of congestion on the Nation’s highways, leading to travel delays, wasted fuel, and billions of dollars in congestion costs. From an economic perspective, travel time accounts for almost half of all costs experienced by highway users (other key components of user costs include vehicle operating costs and costs associated with crashes). Three key aspects of congestion are severity, extent, and duration. Severity refers to the magnitude of the problem at its worst. The extent of congestion is the geographic area or number of people affected. 
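The poor-condition percentages quoted for guideway and train-system assets follow directly from the dollar figures in the text:

```python
# Poor-condition share = value of assets rated poor / total replacement
# value, using the $billions cited above for non-vehicle rail assets.
rail_assets = {
    "guideway elements": {"replacement_value": 143.6, "poor": 19.1},
    "train systems":     {"replacement_value": 92.0,  "poor": 13.7},
}

for name, d in rail_assets.items():
    pct = d["poor"] / d["replacement_value"] * 100
    print(f"{name}: {pct:.0f}% of replacement value in poor condition")
```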
Duration of congestion is the length of time that the traffic is congested, often referred to as the “peak period.” Since there is no universally accepted definition of exactly what constitutes a congestion “problem,” this report uses several metrics to explore different aspects of congestion. The Texas Transportation Institute (TTI) collects data for 458 urban communities of different sizes across the Nation. The TTI 2009 Urban Mobility Report estimates that drivers experienced nearly 4.2 billion hours of delay and wasted approximately 2.8 billion gallons of fuel in 2007. The total congestion cost for these areas (including the implicit value that travelers place on their lost time) was $87.2 billion. The Travel Time Index measures the amount of additional time required to make a trip during the congested peak travel period. The average value for all urbanized areas was 1.24 in 2008, indicating that a trip during the peak period would require 24 percent longer than the same trip during off-peak noncongested conditions. For example, a trip of 60 minutes during the off-peak time would require 74.4 minutes during the peak period. The average Travel Time Index for all urbanized areas had begun to decline in recent years, dropping below its 2000 level of 1.25. This reduction occurred primarily in areas with a population of 1 million or greater. Smaller urbanized areas did not experience the same degree of reduced congestion based on the Travel Time Index or other measures. The average daily percentage of VMT under congested conditions is a metric that indicates the portion of daily traffic on freeways and other principal arterials in an urbanized area that moves at less than free-flow speeds. After increasing from 27.0 percent to 28.6 percent in 2004, this percentage dropped to 26.3 percent in 2008. This decrease can partially be attributed to the reduction in VMT that occurred between 2006 and 2008. There are different ways in which congestion can be measured. 
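The Travel Time Index arithmetic in the example above is a straightforward scaling of an off-peak travel time, sketched as:

```python
# The Travel Time Index scales an off-peak (free-flow) travel time up to
# its expected peak-period duration.
def peak_travel_time(off_peak_minutes: float, travel_time_index: float) -> float:
    return off_peak_minutes * travel_time_index

# The 2008 all-urbanized-area average index of 1.24 applied to a 60-minute
# off-peak trip reproduces the 74.4-minute figure in the text.
print(f"{peak_travel_time(60, 1.24):.1f} minutes")
```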
The CEOs for Cities “Driven Apart” report suggests an alternative approach to the TTI methodology. This report is available at: http://www.ceosforcities.org/driven-apart. A variety of strategies can contribute to reducing congestion. These include the strategic addition of new capacity, increasing the productivity of existing capacity via systems management and operations, providing transportation alternatives along congested corridors, and travel demand management through approaches such as congestion pricing. Transit operational performance can be measured and evaluated using a number of different factors, including the speed of passenger travel, vehicle utilization, and service frequency. Average operating speed in 2008 remained consistent with 2006 levels at 19.5 miles per hour across all transit modes. Average operating speed is an approximate measure of the speed experienced by transit riders and is affected by dwell times and the number of stops. The average speed of nonrail modes was 13.7 miles per hour in 2008, the same as was reported in 2000. Rail mode operating speeds decreased from 24.9 miles per hour in 2000 to 23.9 miles per hour in 2008. Average vehicle occupancy levels did not change significantly between 2000 and 2008. The most significant changes over that period were a 7.5 percent increase for heavy rail and a 7.6 percent decrease for light rail. Light rail decreases may be due to the addition of new capacity in that mode over this period. Several urbanized areas, including Denver, Phoenix, Seattle, Charlotte, and Salt Lake City, opened new light rail systems during this period. The nonrail modes were practically unchanged. Adjusting for the number of seats on an average vehicle for each mode shows that, as expected, vanpool and heavy rail vehicles on average run closer to capacity than other modes. Between 2000 and 2008, transit agencies provided substantially more vanpool, demand response, and light rail service. 
These modes have far outpaced motor bus, with its 1.3 percent per year growth rate in revenue miles, and heavy rail, with its 1.6 percent growth rate. Vanpool, growing at 12.3 percent per year, is set to become a major mode. Demand response is starting to account for a substantial number of service miles, though with an average of only 1.2 passengers, it is still a small contributor to the total number of passenger trips. Productivity per active vehicle increased between 2000 and 2008. Vehicle in-service mileage increased steadily from 2000 to 2008 for all the major modes. Light rail has shown particularly strong growth, though from a low starting point. Demand response has also shown a strong improvement in vehicle miles per active vehicle. From 2000 to 2008, the number of fatalities on urban roadways decreased by about 1 percent, from 16,113 to 15,983. During this same period, fatalities on rural roads decreased by almost 16 percent, from 24,838 to 20,905. Urban Interstate highways were the safest functional system, with a fatality rate of 0.47 per 100 million VMT in 2008. Although the fatality rate on rural local roads declined from 3.45 to 3.08 per 100 million VMT from 2000 to 2008, this functional system continues to have the highest fatality rate. Approximately 53 percent of highway fatalities in 2008 involved a roadway departure, in which a vehicle left its travel lane and crashed. While roadway design and environmental factors play a role in these types of crashes, behavioral factors such as driver intoxication, fatigue, drowsiness, and distraction also have a significant impact. Some roadway departures can be attributed to drivers being distracted while attempting to operate mobile devices. The U.S. DOT is leading efforts to help educate drivers and promote a greater understanding of the issue. In 2008, approximately 21 percent of highway fatalities occurred at intersections. 
Of these fatalities, about 61 percent occurred in urban areas. Older drivers and pedestrians are particularly at risk at intersections. About 40 percent of the fatal crashes for drivers aged 80 or older and about one-third of the pedestrian deaths among people aged 70 or older occurred at intersections. Other major crash types involve speeding and alcohol-related incidents. Speeding was a contributing factor in 31 percent of fatal crashes with 11,674 lives lost. Alcohol-related crashes continue to be a serious public safety problem that accounted for 13,846 deaths and 41 percent of fatal crashes in 2008. In terms of vehicle type, the number of occupant fatalities that involved passenger cars decreased from 20,699 in 2000 to 14,587 in 2008. Fatalities for occupants of light trucks and large trucks also declined, while motorcycle fatalities grew by almost 83 percent over this period from 2,897 in 2000 to 5,290 in 2008. The overall number of traffic-related injuries has decreased over time, from about 3.1 million in 2000 to about 2.3 million in 2008. In 2000, the injury rate was 116 per 100 million VMT; by 2008, the number had dropped to 80 per 100 million VMT. Public transit in the United States has been and continues to be a highly safe mode of transportation, as evidenced by the statistics on incidents, injuries, and fatalities that have been reported by transit agencies for the vehicles they operate directly. Reportable safety incidents include collisions and any other type of occurrence that results in death, a reportable injury, or property damage in excess of a threshold. Since 2002, an injury has been reported only when a person has been immediately transported away from the scene of a transit incident for medical care. Any event producing a reported injury is also reported as an incident. Injuries and fatalities include those suffered by riders as well as by pedestrians, bicyclists, and people in other vehicles. 
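Crash rates in this section are normalized per 100 million VMT. A sketch of that computation using the rounded 2008 injury and VMT figures from the text; because both inputs are rounded, the result only approximates the published rate of 80 per 100 million VMT:

```python
# Crash rates are expressed per 100 million vehicle miles traveled:
# rate = count / VMT * 1e8.
def rate_per_100m_vmt(count: float, vmt: float) -> float:
    return count / vmt * 1e8

# Rounded 2008 figures from the text: "about 2.3 million" injuries and
# "almost 3.0 trillion" VMT. The rounding means this lands near, but not
# exactly at, the published 80 per 100 million VMT.
injuries_2008 = 2.3e6
vmt_2008 = 3.0e12
print(f"{rate_per_100m_vmt(injuries_2008, vmt_2008):.0f} injuries per 100 million VMT")
```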
Reportable security incidents include a number of serious crimes (robberies, aggravated assaults, etc.), as well as arrests and citations for minor offenses (fare evasions, trespassing, other assaults, etc.). Injuries and fatalities may occur not only while traveling on a transit vehicle, but also while boarding, alighting, or waiting for a transit vehicle, or as a result of a collision with a transit vehicle or on transit property. The definition of transit-related fatalities, however, has remained unchanged over this period. Non-homicide/non-suicide fatalities decreased from 245 in 2000 to 216 in 2008, and dropped from 0.56 per 100 million passenger miles traveled (PMT) in 2000 to 0.42 per 100 million PMT in 2008. Both the fatality count for 2008 and the rate per 100 million passenger miles demonstrate that transit is an extremely safe mode of transportation. Although the fatality count had been trending steadily down since 2002, it experienced an unexplained increase of 30 deaths in 2007. Data on incidents (safety and security combined) and injuries per 100 million PMT for transportation services on the five largest modes from 2004 to 2008 (excluding suicides and homicides) suggest that the highway modes (motor bus and demand response) became significantly safer in 2007 and 2008; however, given that this dramatic decrease is unexplained, the data for these years may also suggest a reporting inconsistency. Data for the rail modes are volatile, but do not suggest any significant positive or negative trends over this period. Although commuter rail has a very low number of incidents per PMT, commuter rail incidents are far more likely to result in a fatality than incidents occurring on any other mode. Most likely, this is because the average speed of commuter rail vehicles is considerably higher than that of the other transit modes (except vanpools). 
Motor buses, on the other hand, have a high number of incidents per PMT, but a lower chance of having an incident result in a fatality than almost any other mode (perhaps related to their low average speed). Cash outlays by the Federal government for highway-related purposes were $40.0 billion (22.0 percent of the combined total), including both direct highway expenditures and amounts transferred to State and local governments for use on highways. States provided $90.6 billion (49.7 percent). Counties, cities, and other local government entities funded $51.5 billion (28.3 percent). Of the total $182.1 billion spent for highways in 2008, $91.1 billion (50.1 percent) was used for capital investment. Spending on routine maintenance and traffic services totaled $44.9 billion (24.7 percent); administrative costs (including planning and research) were $14.7 billion; $14.6 billion was spent on highway patrol functions and safety programs; $8.5 billion was used to pay interest; and $8.2 billion was used for bond retirement. Total highway expenditures by all levels of government increased by 48.4 percent between 2000 and 2008. Local government spending grew more quickly than Federal or State spending over this period; the share of total expenditures funded by the Federal government declined from 22.4 percent in 2000 to 22.0 percent in 2008. Federal cash expenditures for capital purposes grew by 48.6 percent, from $26.1 billion in 2000 to $37.8 billion in 2008, while combined State and local capital investment increased by 51.5 percent. Consequently, the Federally-funded share of total capital outlay declined over this period (from 42.6 percent to 41.5 percent). Of the $91.1 billion of capital spending by all levels of government in 2008, $46.6 billion (51.1 percent) was used for system rehabilitation (resurfacing or replacing existing pavements and rehabilitating or replacing existing bridges). 
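The funding shares quoted above are each level of government's outlay divided by the combined total. A quick check using the dollar figures from the text; since the inputs are rounded, a recomputed share can differ from the published one in the final digit:

```python
# Each level of government's share of 2008 highway funding is its outlay
# divided by the combined total; figures in $billions from the text.
outlays = {"Federal": 40.0, "State": 90.6, "Local": 51.5}
total = sum(outlays.values())  # $182.1 billion

for level, amount in outlays.items():
    print(f"{level}: ${amount}B -> {amount / total:.1%}")
# Published shares: 22.0 / 49.7 / 28.3 percent.
```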
An estimated $33.6 billion (36.8 percent) was used for system expansion (constructing new roads and bridges or adding lanes to existing roads); and $11.0 billion (12.1 percent) went for system enhancements such as safety, operational, or environmental enhancements. In 2008, $94.2 billion (48.9 percent) of the revenue generated for spending on highways and bridges came from highway-user charges—including motor-fuel taxes, motor-vehicle fees, and tolls. Other major sources of revenues for highways included general fund appropriations of $40.4 billion (21.0 percent) and bond proceeds of $19.9 billion (10.3 percent). All other sources such as property taxes, other taxes and fees, lottery proceeds, interest income, and miscellaneous receipts totaled $38.2 billion (19.8 percent). In 2008, $52.5 billion was generated from all sources to finance transit investment and operations. Transit funding comes from public funds allocated by Federal, State, and local governments and system-generated revenues earned by transit agencies from the provision of transit services. Of the funds generated in 2008, 73.9 percent ($38.8 billion) came from public sources and 26.1 percent came from passenger fares ($11.4 billion) and other system-generated revenue sources ($2.3 billion). The Federal share of this was $9.0 billion (23.1 percent of total public funding and 17.1 percent of all funding). Local jurisdictions provided the bulk of transit funds, $18.5 billion in 2008, or 47.5 percent of total public funds and 35.1 percent of all funding. In 2008, total public transit agency expenditures for capital investment were $16.1 billion and accounted for 41.5 percent of total public funds. Federal funds were $6.4 billion in 2008, 39.8 percent of total transit agency capital expenditures. State funds provided an additional 12.4 percent and local funds provided the remaining 47.8 percent of total transit agency capital expenditures. 
Of total 2008 transit capital expenditures, 76.4 percent ($12.3 billion) was invested in rail modes of transportation, compared with 23.6 percent ($3.8 billion) invested in nonrail modes. This investment distribution has been consistent over the last decade. In 2008, $36.4 billion was available for transit operating expenses (wages, salaries, fuel, spare parts, preventive maintenance, support services, and leases). The Federal share of this has declined from the 2006 high of 8.2 percent to 7.1 percent. Similarly, the share generated from system revenues has decreased from 40.3 percent in 2006 to 37.6 percent. These decreases have been offset by the State share, which has increased from 22.5 percent in 2006 to 25.8 percent. The local share of operating expenditures has been close to 2008’s 29.7 percent for several years. The average annual increase in operating expenditures per vehicle revenue mile for all modes combined between 2000 and 2008 was 4.1 percent. In 2008, the average operating expenditure across all transit modes was $8.60 per vehicle revenue mile. Analysis of National Transit Database reports for the largest 10 transit agencies (by ridership) shows that the growth in operating expenses is led by the cost of fringe benefits (36.0 percent of all operating costs for these agencies), which have been going up at a rate of 3.4 percent per year above inflation (constant dollars) since 2000. By comparison, average salaries at these ten agencies grew at an inflation-adjusted rate of only 0.1 percent per year in that period. Operating expenditures per passenger mile for all transit modes combined increased at an average annual rate of 4.3 percent between 2000 and 2008 (from $0.44 to $0.62). The methods and assumptions used to analyze future highway, bridge, and transit investment scenarios for this report have evolved over time, to incorporate current research, new data sources, and improved estimation techniques relying on economic principles. 
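The "average annual rate" figures in this section are compound annual growth rates. Recomputing the operating-expenditure-per-passenger-mile rate from the rounded endpoint values in the text ($0.44 to $0.62 over the 8 years from 2000 to 2008) illustrates the formula; the report itself computes from unrounded values, so the result lands near, not exactly at, the published 4.3 percent:

```python
# Compound annual growth rate: rate = (end / start) ** (1 / years) - 1
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

# Operating expenditure per passenger mile, 2000 to 2008, from the
# rounded endpoints in the text (published rate: 4.3 percent per year).
growth = cagr(0.44, 0.62, 8)
print(f"{growth:.1%} per year")
```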
Traditional engineering-based analytical tools focus mainly on estimating transportation agency costs and the value of resources required to maintain or improve the conditions and performance of infrastructure. This type of analytical approach can provide valuable information about the cost effectiveness of transportation system investments from the public agency perspective, including the optimal pattern of investment to minimize life-cycle costs. However, this approach does not fully consider the potential benefits to users of transportation services from maintaining or improving the conditions and performance of transportation infrastructure. The investment/performance analyses presented in Chapters 7 through 10 were developed using the Highway Economic Requirements System (HERS), the National Bridge Investment Analysis System (NBIAS), and the Transit Economic Requirements Model (TERM). Each of these tools has a broader focus than traditional engineering-based models and takes into account the value of services that transportation infrastructure provides to its users as well as some of the impacts of transportation activity on non-users. The methodologies used to analyze investment for highways, bridges, and transit are detailed in Appendices A, B, and C. For purposes of computing a benefit-cost ratio for a transportation project, the “cost” (the denominator) is conventionally measured as the capital expenditures required to carry out the project. The “benefits” (the numerator) are generally measured in terms of reductions in costs experienced by (1) transportation agencies (such as for maintenance), (2) users of the transportation system (such as savings in travel time or vehicle operating costs, or reductions in crashes), and (3) others who are affected by the operation of the transportation system (such as reductions in environmental or other societal costs). Increases in any of these types of costs are treated as negative benefits. 
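The benefit-cost convention just described can be sketched as follows; the project figures are purely illustrative, and the three benefit categories mirror the agency, user, and societal cost reductions listed above:

```python
# Benefit-cost ratio: capital cost in the denominator, and reductions in
# agency, user, and societal costs summed in the numerator. A cost that
# increases enters as a negative "saving". Illustrative numbers only.
def benefit_cost_ratio(capital_cost: float,
                       agency_savings: float,
                       user_savings: float,
                       societal_savings: float) -> float:
    benefits = agency_savings + user_savings + societal_savings
    return benefits / capital_cost

# A project that raises maintenance costs (negative agency savings) but
# yields large user travel-time savings:
print(benefit_cost_ratio(capital_cost=10.0,
                         agency_savings=-1.0,
                         user_savings=14.0,
                         societal_savings=2.0))  # 1.5
```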
An economics-based approach will likely result in different decisions about the catalog of desirable improvements than would a purely engineering-based approach. For example, if a highway segment, bridge, or transit system is greatly underutilized, benefit-cost analysis might suggest that it would not be worthwhile to fully preserve its condition or to address its engineering deficiencies. Conversely, a model based on economic analysis might recommend additional investments to expand capacity or improve travel conditions above and beyond the levels dictated by an analysis that simply minimized engineering life-cycle costs, if doing so would provide sufficient benefits to the users of the system. These types of considerations can potentially influence the establishment of standards as to what constitutes a “State of Good Repair” for different types of transportation assets. An economics-based approach also provides a more sophisticated method for prioritizing potential improvement options when funding is constrained. By ranking investment opportunities in order of their benefit-cost ratios, economic analysis helps provide guidance in directing limited resources toward those improvements that provide the largest benefits to transportation system users. Projects selected for implementation can be limited to those having a benefit-cost ratio above the threshold that would result in all available funds being used; projects that produce lesser net benefits can be deferred for future consideration. HERS, NBIAS, and TERM each use benefit-cost analysis as part of their decision-making process, but their approaches are very different. Each model relies on separate databases, making use of specific data available for only one part of the transportation network and addressing issues unique to that particular mode. The models have not evolved to the point where direct multimodal analysis is possible. 
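The prioritization rule just described (fund projects in descending order of benefit-cost ratio until available funds are exhausted, deferring the rest) can be sketched as follows; the project names, costs, and benefits are invented for illustration:

```python
def select_projects(projects, budget):
    """Rank candidate projects by benefit-cost ratio and fund them in
    descending order until the budget is exhausted; projects that do not
    fit within the remaining funds are deferred for future consideration."""
    ranked = sorted(projects, key=lambda p: p["benefits"] / p["cost"], reverse=True)
    funded, remaining = [], budget
    for p in ranked:
        if p["cost"] <= remaining:
            funded.append(p["name"])
            remaining -= p["cost"]
    return funded

# Hypothetical candidates (costs and benefits in $ millions).
candidates = [
    {"name": "widen A", "cost": 40, "benefits": 60},           # BCR 1.5
    {"name": "resurface B", "cost": 10, "benefits": 25},       # BCR 2.5
    {"name": "replace bridge C", "cost": 30, "benefits": 36},  # BCR 1.2
]
print(select_projects(candidates, budget=60))  # ['resurface B', 'widen A']
```

With a $60 million budget, the bridge replacement (the lowest-ranked project) is deferred even though it is cost-beneficial, mirroring the constrained-funding behavior described above.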
Chapter 7 analyzes the projected impacts of different levels of future capital investment on a series of measures of physical condition, operational performance, and other benefits to system users. These levels are described in terms of both average annual investment levels over 20 years, and the annual rate of increase or decrease in constant dollar investment that could generate these levels. Chapter 8 presents a set of illustrative 20-year capital investment scenarios building upon the analysis presented in Chapter 7. The Department does not endorse or recommend any particular scenario. The investment levels associated with each scenario represent hypothetical levels of combined capital spending nationwide; the scenarios do not identify how much might be contributed by each level of government or from private sources to support such spending. Some of these scenarios are oriented toward achieving a particular level of system performance. In considering the future system performance impacts identified for each scenario, it is important to note that they represent hypothetical models of what could be achievable assuming a particular level of investment rather than what would be achieved in reality. While the economics-based approach applied in HERS, NBIAS, and TERM would suggest that projects be implemented in order based on their benefit-cost ratios until the funding available under a given scenario is exhausted, the reality is that other factors influence Federal, State, and local decision making. If some projects with lower benefit-cost ratios were carried out instead of projects with higher benefit-cost ratios, then the actual amount of investment required to achieve any given level of performance would be higher than the amount predicted in this report. Further, several assumptions, estimates, and projections are used to derive the investment scenarios, and no effort to assess the predictive value of these models has been undertaken to date.
As in any modeling process, simplifying assumptions have been adopted to make the analysis practical within the limitations of available data. Other scenarios are defined around funding all potential investments above a specified benefit-cost ratio threshold. It is important to note that simply increasing spending to the levels identified in these scenarios would not in itself guarantee that these funds would be expended in a cost-beneficial manner. Also, some potential capital investments selected by the models may be infeasible as a practical matter due to factors beyond those considered in the models. Because of this, the supply of feasible cost-beneficial projects could be exhausted at a lower level of investment than that indicated by these scenarios, and the projected improvements to future conditions and performance associated with these scenarios may not be fully obtainable in practice. Chapter 9 provides supplemental scenario analyses, including comparisons of recent system performance and funding trends with projected future needs in order to identify consistencies and inconsistencies between what has occurred in the past and what is expected for the future. In addition, projections from selected prior editions are compared with actual spending and outcomes over time. Issues relating to the interpretation of scenarios, including the timing of future investment and the conversion of scenarios from constant dollars to nominal dollars, are also explored. Chapter 9 includes a set of supplemental analyses that assume that any increases in highway and bridge spending above 2008 levels would be funded from user charges imposed on either a per-mile or a per-gallon basis. The general effect of such charges is to reduce future travel and reduce the projected level of investment needed to achieve a particular performance objective.
These analyses also examine the potential impacts that the widespread adoption of congestion pricing might be expected to have on the level of investment required to achieve certain levels of future conditions and performance. Chapter 10 explores the impact that changing some key technical assumptions could have on the overall results projected by HERS, NBIAS, and TERM. Of the $91.1 billion of total capital outlay by all levels of government combined in 2008, $54.7 billion was used for types of capital improvements modeled in HERS, including pavement resurfacing, pavement reconstruction, and system expansion. (HERS models investments on Federal-aid highways only; $12.7 billion was spent on similar types of improvements to other roads.) In 2008, $12.8 billion was spent on improvement types modeled in NBIAS, including bridge repair, rehabilitation, and replacement. The remaining $11.0 billion went for system enhancements not captured by either model. Sustaining HERS-modeled capital spending on Federal-aid highways at its base year 2008 level in constant dollar terms for 20 years (i.e., an annual change in spending of zero percent) is projected to result in a worsening of overall system performance in 2028 relative to 2008, including a 2.8 percent increase in pavement roughness, and a 6.7 percent increase in average delay per VMT; if annual spending growth were negative, HERS projects even larger increases in pavement roughness and delay by 2028. HERS projects that if constant dollar spending were to grow by 5.90 percent per year, this would be sufficient to finance all potentially cost-beneficial capital improvements on Federal-aid highways by 2028; at this level of investment, average pavement roughness and delay are projected to improve by 24.3 percent and 7.7 percent, respectively, over the period 2008 through 2028. 
The NBIAS model estimates that there was a backlog of potentially cost-beneficial bridge investments in 2008 of $121.2 billion, of which $102.1 billion was on Federal-aid highway bridges, $60.4 billion was on NHS bridges, and $38.1 billion was on Interstate System bridges. (These figures do not include costs associated with system expansion modeled separately in HERS.) In the absence of future capital investment, this backlog would grow over time as existing bridges age. If spending by all levels of government for the types of improvements modeled in NBIAS were sustained at 2008 levels ($12.8 billion—all bridges; $9.4 billion—Federal-aid highway bridges; $5.4 billion—NHS bridges; $3.3 billion—Interstate System bridges) in constant dollar terms, NBIAS projects that this would be sufficient to reduce the backlog by 2028 for Interstate System bridges, NHS bridges, and all bridges; however, the backlog for Federal-aid highway bridges would increase by an estimated 6.5 percent, driven primarily by the subset of bridges on Federal-aid highways that are not on the NHS. NBIAS projects that eliminating the economic bridge investment backlog and addressing new bridge deficiencies as they arise over 20 years would require an annual increase in constant dollar spending of 4.31 percent for all bridges, 5.36 percent for Federal-aid highway bridges, 4.48 percent for NHS bridges, and 4.39 percent for Interstate System bridges. U.S. transit agencies spent a combined $16.1 billion in 2008 on capital improvements to the Nation’s transit infrastructure and vehicle fleets. This amount included $11.0 billion in the preservation (rehabilitation and replacement) of existing assets already in service and $5.1 billion to expand transit capacity—both to accommodate ridership growth and to improve performance for existing riders. 
Sustaining TERM-modeled transit capital spending at these base year 2008 levels for 20 years is projected to result in an overall decline in both transit system conditions and performance. This includes an overall deterioration in the average physical condition of the Nation’s stock of transit assets, with consequent performance impacts on service reliability and potentially on safety, an estimated 50 percent increase in the size of the “State of Good Repair” (SGR) backlog by 2030, and increases in vehicle crowding on the order of 5 to 30 percent (depending on the magnitude of ridership growth). For this edition of the report, the FTA developed an SGR benchmark scenario which estimates the investment required to attain and maintain a state of good repair for the Nation’s existing transit assets. Prior editions of this report included scenarios that were based on maintaining conditions or improving the condition of assets. Details of the new scenarios relative to past scenarios are provided in Chapter 9 and its Executive Summary. Accordingly, for the SGR benchmark scenario, TERM estimates the average annual level of 20-year investment required to eliminate the existing investment backlog and bring all existing transit assets to the SGR benchmark to be roughly $18.0 billion (without consideration of investment cost-effectiveness) and closer to $17.0 billion if limited to those asset reinvestments passing TERM’s cost-benefit analysis. Similarly, an additional $4.2 billion to $7.3 billion in annual expansion investments are required to maintain transit performance (as measured by vehicle crowding) at 2008 levels, depending on the actual rate of growth in ridership. When limited to urbanized areas (UZAs) with populations greater than 1 million, transit agencies expended $14.8 billion on capital projects in 2008, including $10.2 billion on asset preservation and $4.6 billion on transit capacity expansion. 
In contrast, the average annual investment level for these UZAs to attain SGR is estimated to be $15.6 billion over the next 20 years (without consideration of investment cost effectiveness) and closer to $14.5 billion to $15.1 billion if limited to those asset reinvestments passing TERM’s cost-benefit analysis. These scenarios suggest that an additional $2.6 billion to $6.1 billion are required to support projected increases in transit boardings while maintaining current service performance levels (as measured by the number of riders per peak vehicle). Transit agencies operating outside of UZAs with populations greater than 1 million expended $1.3 billion on capital projects in 2008, including $0.8 billion on preservation and $0.5 billion on asset expansion. In contrast, the average annual investment level for these smaller UZAs and all rural areas to attain SGR is estimated to be $2.4 billion over the next 20 years (or approximately $2.0 billion if limited to those reinvestments passing TERM’s benefit-cost analysis), while the level of average annual investment required to address both SGR and asset expansion needs of these smaller UZAs and rural areas is estimated to be between $2.5 billion and $2.8 billion, depending on the level of ridership growth. This report presents a set of illustrative 20-year capital investment scenarios; this report does not endorse any of these scenarios as a target level of funding, nor does it make any recommendations concerning future levels of Federal funding. The scenarios for highways and bridges build upon separate analyses developed using HERS and NBIAS and take into account other types of capital spending that are not currently modeled. The scenario criteria were applied separately to the Interstate System, the NHS, Federal-aid highways, and the highway system as a whole. The Sustain Current Spending scenario assumes that capital spending is sustained in constant dollar terms at base year 2008 levels between 2009 and 2028. 
(In other words, spending would rise by exactly the rate of inflation over that period.) The Maintain Conditions and Performance scenario assumes that capital investment gradually changes in constant dollar terms over 20 years to the point at which selected measures of highway and bridge performance in 2028 are maintained at their base year 2008 levels. The average annual investment levels associated with meeting these goals are $24.3 billion for the Interstate System, $38.9 billion for the NHS, $80.1 billion for Federal-aid highways, and $101.0 billion for all roads. The investment level identified for maintaining conditions and performance on the NHS is lower than the $42.0 billion spent by all levels of government combined on the NHS in 2008, indicating that sustaining NHS spending at 2008 levels could result in improved overall conditions and performance on the NHS. The Improve Conditions and Performance scenario assumes that capital investment gradually rises in constant dollar terms to the point at which all potentially cost-beneficial investments could be implemented by 2028. This scenario can be thought of as an “investment ceiling” above which it would not be cost-beneficial to invest. The average annual investment level for this scenario is $170.1 billion for all roads, 86.6 percent higher than actual spending in 2008. Of the $170.1 billion Improve Conditions and Performance scenario investment level for all roads, $85.1 billion (50 percent) would be directed toward improving the physical condition of existing infrastructure assets; this amount is identified as the State of Good Repair benchmark. The average annual State of Good Repair benchmark levels identified for Federal-aid highways, the NHS, and the Interstate System are $67.8 billion, $29.8 billion, and $16.2 billion, respectively.
Investing at these levels could bring the share of Federal-aid highway VMT on pavements with good ride quality up from 46.4 percent in 2008 to 74.1 percent by 2028; the comparable percentages for the NHS and the Interstate System could be increased to 89.6 percent and 94.2 percent, respectively, by 2028. HERS projects that improving these measures beyond this point would not be cost-beneficial. This report presents a set of illustrative 20-year transit capital investment scenarios. The scenarios for transit capital needs build upon analyses developed using TERM and were applied separately to the Nation’s transit assets as a whole, as well as for two separate groupings of transit operators based on the size of the UZAs they serve. The Sustain Current Spending scenario assumes that capital spending is sustained in constant dollar terms at year 2008 levels between 2009 and 2028. Transit operators spent $16.1 billion on capital projects in 2008. Of this amount, $11.0 billion was devoted to the preservation of existing assets while the remaining $5.1 billion was dedicated to investment in asset expansion to support ongoing ridership growth and to improve service performance. This scenario considers the expected impact on the physical conditions and performance of the Nation’s transit infrastructure if these expenditure levels are sustained in constant dollar terms. TERM analysis suggests that sustaining spending at 2008 levels would likely yield an overall decline in transit conditions, an estimated 50 percent increase in the SGR backlog by 2030, and an increase in crowding on transit passenger vehicles. 
The State of Good Repair (SGR) benchmark estimates the level of annual capital investment required to eliminate the current transit investment backlog and then maintain all transit assets in a state of good repair thereafter, all without consideration of the cost-effectiveness of each investment (i.e., investments are not required to pass TERM’s benefit-cost test under this scenario). TERM estimates this annual level of investment to be $18.0 billion for the Nation as a whole. This includes $15.6 billion for UZAs with populations greater than 1 million (with most of these funds required for rail asset reinvestment), and $2.4 billion for the remaining smaller UZAs and rural areas currently served by transit. The Low Growth and High Growth scenarios consider the level of investment to address both asset SGR and service expansion needs subject to two differing potential levels of growth (and with all investments now required to pass a benefit-cost analysis). The Low Growth scenario assumes transit ridership will grow as projected by the Nation’s metropolitan planning organizations (MPOs), while the High Growth scenario assumes the average rate of growth (by UZA) as experienced in the industry since 1999. The Low Growth scenario assumes that ridership will grow at an annual rate of 1.4 percent over the 20-year period from 2008 to 2028; conversely, the High Growth scenario assumes that ridership will increase at a rate of 2.8 percent per year over that time frame. TERM estimates this average annual level of investment to be between $20.8 billion and $24.5 billion for the Nation as a whole between 2008 and 2028, including from $16.6 billion to $17.2 billion for asset preservation and $4.2 billion to $7.3 billion for expansion needs, depending on the realized rate of ridership growth. When limited to the UZAs with populations greater than 1 million, the average annual level of investment to address both SGR and expansion needs is $18.2 billion to $21.7 billion. 
The comparable range for the smaller UZAs and all rural areas with transit is $2.5 billion to $2.8 billion annually. As noted earlier, Chapter 8 includes scenarios for selected subsets of the overall highway system. The particular analyses from Chapter 9 discussed below apply to Federal-aid highways only, not to all roads. The goal of the Maintain Conditions and Performance scenario is to maintain overall conditions and performance for the lowest cost possible, without regard to how various system components might be affected. In practice, the conditions and performance of higher-ordered functional systems such as principal arterials tend to improve under this scenario, offset by some deterioration on lower-ordered systems. Maintaining pavement condition, bridge condition, and operational performance for each individual functional class would be more expensive. While the average annual investment level associated with the Maintain Conditions and Performance scenario for Federal-aid highways is $80.1 billion, maintaining these specific performance measures on individual functional systems would cost $88.8 billion per year. The baseline scenarios presented in this report assume no linkages between future investment needs and the types of financing mechanisms that might be utilized to address those needs. In reality, increasing user charges to support additional future spending would have an impact on the cost of driving, and hence would affect future VMT growth. The widespread adoption of congestion pricing would have a particularly significant impact on future system performance and investment needs.
Of the $134.9 billion average annual investment level for the Improve Conditions and Performance scenario for Federal-aid highways, $105.4 billion was derived from HERS; assuming the widespread adoption of congestion pricing, HERS projects that an average annual investment level of only $73.8 billion would be needed to address all potentially cost-beneficial improvements. Prior editions of this report included scenarios that considered the level of investment required either to (1) maintain the condition of existing transit assets at current levels or to (2) improve the condition of those assets to an overall condition of “good” (i.e., 4.0 on TERM’s condition scale). For this edition, these “maintain” and “improve” conditions scenarios have been replaced by the SGR benchmark, which estimates the investment required to attain and maintain a state of good repair for the Nation’s existing transit assets. The SGR benchmark is financially unconstrained and considers the level of investment required to eliminate the current investment backlog and to address all reinvestment needs as they arise such that all asset conditions remain at 2.5 or higher on TERM’s condition scale. This change was found to have two key implications. First, analysis has determined that, given a high proportion of existing long-lived assets currently in good or excellent condition, it is not realistic or rational to attempt to maintain asset conditions at current levels over the next 20 years. Assuming transit operators follow reasonable asset rehabilitation and replacement policies, asset conditions are likely to decline (even as the proportion of assets not in SGR is reduced) until existing transit assets attain a “steady state” average condition value that reflects a given set of rehabilitation and replacement practices. Second, only a significant and ongoing investment in expansion assets can reverse this general downward trend in conditions. 
Moreover, it is just this type of ongoing expansion in new transit assets over the past two decades that has tended to reduce the rate of decline in average conditions across all transit assets (both new and existing). Analysis suggests that this effect has tended to mask somewhat the underlying decline in asset conditions for existing (as opposed to existing plus new) transit assets. Also in contrast to prior report editions, which only considered a single ridership growth projection, this edition assesses transit capital expansion under both low and high ridership growth outcomes. Specifically, the Low Growth scenario assumed UZA-specific rates of PMT growth projected by the Nation’s MPOs, while the High Growth scenario assumed UZA-specific compound annual growth rates based on historical averages. Analysis shows that historical rates of PMT growth have typically exceeded the MPO-projected rates used for long-range transportation planning purposes. (The MPO-projected rates were the only source of ridership growth estimates used to generate transit expansion needs in prior editions of this report.) For example, from 1992 to 2008, the historical compound annual PMT growth rate averaged roughly 2.1 percent compared with the 1.3 percent growth rate MPOs have projected for the upcoming 20-year period. Given the difference between the two growth rates (and the relatively high rate of historic PMT growth as compared with other measures, such as UZA population growth), the 2.1 percent historical growth rate of PMT was identified as a reasonable input value for the High (or higher) Growth scenario. Similarly, the 1.3 percent MPO-projected growth rate was used as an input value for the Low (or lower) Growth scenario. States provide forecasts of future VMT for each individual HPMS sample section evaluated in HERS; for 2008, the weighted average annual VMT growth rate based on these forecasts is 1.85 percent.
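The compound annual growth rates cited in these comparisons can be computed directly from endpoint values. The sketch below uses invented PMT figures chosen only so that the result lands near the 2.1 percent historical rate discussed above; they are not actual National Transit Database values:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values over a span of years."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Illustrative figures (billions of passenger miles, hypothetical):
# growth from 38.5 to 53.7 over the 16 years from 1992 to 2008
# corresponds to roughly 2.1 percent compound annual growth.
rate = cagr(38.5, 53.7, 16)
print(round(rate * 100, 1))  # 2.1
```

The same function applied to MPO long-range forecasts would recover the lower projected rate, which is the basis of the Low Growth scenario input.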
HERS assumes that these forecasts represent the annual growth in travel over 20 years that would occur if a constant level of service is maintained on that facility. This assumption is reflected in the baseline analysis presented in this report, for which HERS estimates that an annual constant dollar spending increase of 5.90 percent could be sufficient to fund all potentially cost-beneficial investments by 2028, translating into an average annual investment level of $105.4 billion (compared with the $54.7 billion spent in 2008 on the types of capital spending modeled in HERS). To explore the possibility that traffic might grow more slowly than assumed, an alternative HERS analysis was conducted assuming for illustration that VMT will grow at the average annual rate of 1.23 percent, the historical average from 1998 to 2008. Modifying the input forecasts to match this VMT growth rate would reduce the benefits associated with pavement and capacity improvements, so that an annual spending increase of only 3.52 percent (translating into an average annual investment level of $80.2 billion) would be sufficient to fund all potentially cost-beneficial projects by 2028. If spending were instead sustained at 2008 levels, HERS projects that average speeds would improve by 2.1 percent under this alternative compared with a decline of 0.7 percent under the baseline assumptions. Another sensitivity test concerns the growth rate between 2008 and 2028 in motor fuel prices relative to the general rate of inflation. The baseline HERS assumption is that there is no difference between these rates. An alternative assumption was based on the High Oil Price case from the Energy Information Administration, Annual Energy Outlook 2010. In this case, the ratio of gasoline prices to the consumer price index nearly regains its 2008 level by 2012 and increases thereafter through 2028 at the equivalent of 3.4 percent annually.
The change in assumption from the baseline case causes HERS to reduce its projection of future travel growth and reduces the model’s estimate of the average annual investment level needed to fund all projects with a benefit-cost ratio of 1.0 or higher by 2028 to $96.9 billion. Increases in travel time clearly impose costs on drivers, but it is difficult to quantify the value of time precisely, much less to forecast how it will change. Increasing the baseline estimate of the value of time by 25 percent would cause HERS to attribute more benefits to projects (particularly widening projects) that would result in travel time savings. This in turn would increase the estimate of potentially cost-beneficial investment to $114.0 billion per year. The HERS and NBIAS models each apply a discount rate to future benefits to reflect the implicit cost associated with directing resources to improve highways or bridges that could otherwise be used elsewhere in the public or private sector. Reducing the discount rate from the baseline 7 percent to 3 percent (reflecting lower interest rates) would increase the HERS estimate of the average annual investment level needed to fund all potentially cost-beneficial projects to $129.0 billion. The comparable average annual investment level projected by NBIAS for all bridges would be $24.8 billion assuming a 3 percent discount rate, about 21 percent more than the $20.5 billion baseline value computed based on a 7 percent discount rate. TERM relies on a number of key input values, variations of which can significantly impact the value of TERM’s capital needs projections. Each of the three unconstrained investment scenarios examined in Chapter 8—including the SGR benchmark and the Low Growth and High Growth scenarios—assumes that assets are replaced at a condition rating of 2.50 as determined by TERM’s asset condition decay curves.
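The sensitivity of the models to the discount rate follows directly from how future benefits are discounted to present value. The sketch below uses a hypothetical constant benefit stream (not model output) to show why lowering the rate from 7 percent to 3 percent allows more projects to pass the benefit-cost test:

```python
def present_value(annual_benefit, discount_rate, years):
    """Discount a constant stream of annual benefits back to present value."""
    return sum(annual_benefit / (1 + discount_rate) ** t
               for t in range(1, years + 1))

# A hypothetical $1M-per-year benefit stream over a 20-year horizon is
# worth substantially more at a 3 percent discount rate than at the
# baseline 7 percent, so marginal projects that fail the benefit-cost
# test at 7 percent can pass it at 3 percent.
pv_at_7 = present_value(1.0, 0.07, 20)
pv_at_3 = present_value(1.0, 0.03, 20)
print(round(pv_at_7, 2), round(pv_at_3, 2))
```

The roughly 40 percent larger benefit total at the lower rate is the mechanism behind the higher cost-beneficial investment levels reported above.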
Analysis suggests that each of these scenarios is sensitive to changes in this replacement condition threshold, with the sensitivity increasing disproportionately as the replacement condition threshold rises. For example, reducing the condition threshold to 2.25 tends to reduce preservation needs by just under $2 billion (close to 10 percent). In contrast, increasing the threshold to 2.75 increases preservation needs by more than $3 billion (just under 20 percent), while a further threshold increase to 3.00 increases preservation needs by nearly $8 billion (over 40 percent). This increasing sensitivity reflects the fact that ongoing, equal incremental changes to the replacement condition threshold yield greater proportionate reductions in the length of the asset life cycles as higher replacement condition values are reached. Needs estimates for scenarios employing TERM’s benefit-cost analysis are also particularly sensitive to changes in capital costs (assuming no comparable increase in benefits), as these increases tend to reduce the value of the benefit-cost ratio, causing some previously acceptable projects to fail this test. For example, a 25 percent increase in capital costs increases investment costs by just under $3 billion (nearly 14 percent) for the Low Growth scenario and by just under $4 billion (over 15 percent) for the High Growth scenario. In contrast, needs under the SGR benchmark (which does not utilize TERM’s benefit-cost test) increase by more than $4 billion (precisely 25 percent) in response to a 25 percent increase in capital costs. The most significant source of transit investment benefits as assessed by TERM’s benefit-cost analysis is the net cost savings to users of transit services, a key component of which is the value of travel time savings. Consequently, the per-hour value of travel time for transit riders is a key driver of total investment benefits for scenarios that employ TERM’s benefit-cost test.
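The mechanism by which a capital cost increase causes previously acceptable projects to fail the benefit-cost test can be shown with hypothetical numbers (this is a generic screening sketch, not TERM's actual implementation):

```python
def passes_bc_test(benefits, cost, threshold=1.0):
    """A project passes a TERM-style screening test if its
    benefit-cost ratio meets or exceeds the threshold."""
    return benefits / cost >= threshold

# Hypothetical project: benefits of 12 against a cost of 10 (BCR 1.2).
base_cost = 10.0
print(passes_bc_test(12.0, base_cost))         # True
# A 25 percent capital cost increase, with no change in benefits,
# drops the BCR to 12 / 12.5 = 0.96, so the previously acceptable
# project now fails the test and is excluded from the needs estimate.
print(passes_bc_test(12.0, base_cost * 1.25))  # False
```

This is why the cost-constrained scenarios grow by less than the full 25 percent: some marginal projects simply drop out, whereas the SGR benchmark, which skips the test, scales with costs one-for-one.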
For example, a doubling of the value of time increases total needs for the Low Growth and High Growth scenarios by approximately $2 to $3 billion (8 to 10 percent) due to the increase in total benefits relative to costs. Similarly, a halving of the value of time decreases total investment needs for these scenarios by approximately $3 billion each (12 to 14 percent). Finally, TERM’s benefit-cost test is responsive to the discount rate used to calculate the present value of the streams of investment costs and benefits. For example, reducing the discount rate from the base rate of 7 percent to 3 percent yields an increase of approximately $1 billion to $2 billion (6 to 8 percent) in total investment needs under the Low Growth and High Growth scenarios, respectively. The 1987 United Nations (UN) World Commission on Environment and Development defined sustainability as “development that meets the needs of the present without compromising the ability of future generations to meet their own needs.” While other organizations have defined sustainability differently, a common concept that has emerged is the “triple bottom line,” referring to the economy, the environment, and society. In transportation, the triple bottom line relates to sustainable solutions for the natural environmental systems surrounding the transportation system, the economic efficiency of the system, and societal needs (e.g., mobility, accessibility, and safety). Transportation is crucial to our economy and quality of life, but the process of building, operating, and maintaining transportation systems has environmental consequences. Fostering more environmentally sustainable approaches to transportation is essential in order to avoid negative impacts in the near term and to ensure that future generations will be able to enjoy the same or better standards of living and mobility as exist today.
Sustainable transportation focuses on environmental impacts such as improved energy efficiency, reduced dependence on oil, reduced greenhouse gas (GHG) emissions, and other improvements to the natural environment involving air quality and water quality. From a sustainability perspective, the heavy reliance of the transportation system on fossil fuels is of significant concern, as they are non-renewable; generate air pollution; and contribute to the buildup of carbon dioxide (CO2) and other GHGs, which trap heat in the Earth’s atmosphere. The United States has relatively high GHG emissions per capita, even compared with other similarly affluent countries. The transportation sector consumes 29 percent of the total energy used in the United States; this represents 5 percent of global GHG emissions. Over the past four decades, progress has been made in reducing emissions of air pollutants both nationally and from the transportation sector in particular. However, many Americans continue to live in regions that exceed health-based air-quality standards. To seek more sustainable options, transportation programs will need to focus on designing, constructing, maintaining, and operating infrastructure in ways that accommodate multiple modes of transportation, promote connectivity, and minimize environmental impacts. At this time there is no widely recognized and accepted method for measuring sustainability in the transportation community. One of the challenges is the need to shift from operations-focused performance measures to more holistic indicators, even if they are more difficult to quantify. At the Federal level, environmental sustainability has been adopted as a strategic goal in the U.S. DOT Strategic Plan 2010-2015. At the State level, transportation agencies are developing metrics that address various aspects of sustainability and monitoring progress toward specific goals—often in their long-range and project-level planning processes.
Some potential measures that have been identified for assessing progress in improving sustainability relate to reducing GHG emissions, improving system efficiency, reducing the growth of VMT, transitioning to fuel-efficient vehicles and alternative fuels, and increasing the use of recycled materials in transportation. The transportation planning process provides a forum for discussion of environmental, economic, and community concerns and can facilitate the inclusion of sustainability considerations in transportation projects. One example of efforts to respond to the challenge of creating a sustainable transportation system is the increased use of context sensitive solutions (CSS). A CSS approach requires that transportation planning consider the interaction between transportation systems and their surroundings, tailoring solutions to the local area’s human, cultural, and natural environment. Climate change has received increased attention over the last decade, with a key concern being the impact on people and the planet. For the transportation community, policies to address climate change focus on GHG mitigation and climate change adaptation. Climate change adaptation focuses on anticipating potential future changes (e.g., higher sea levels, increased temperatures, altered precipitation patterns, greater storm intensity) and the potential impact on transportation (e.g., damaged or flooded facilities). Research efforts regarding the potential impacts of climate change on highway infrastructure are ongoing. U.S. DOT released a report on projected changes in climate over the century, used geographic information systems to map areas with transportation infrastructure along the Atlantic coast that will be potentially vulnerable to sea level rise, and is conducting a second adaptation study focused on the Gulf Coast region.
These studies identify potential climate change impacts that are widespread and modally diverse and that would stress transportation systems in ways beyond those for which they were designed. Temperature and sea levels have risen in recent decades, and these rates of change may accelerate in the future as GHG concentrations rise. Climate change has the potential to cause real damage to transportation infrastructure and services. Transportation agencies across the Nation are addressing climate change mitigation issues at various levels; however, the issue of adapting transportation infrastructure to climate change impacts has received less widespread attention. Discussions to date have focused primarily on coastal States. Adapting to the impacts of climate change starts with inventorying critical infrastructure, understanding potential future climate change impacts, and assessing vulnerabilities and risks. Once adaptation needs are assessed, adaptation options can be classified in one of five broad categories. “Maintain, manage, and operate” strategies make no changes to the base transportation facility and focus on repairing damages as they occur. A “protect and strengthen” approach involves proactively strengthening a facility to meet new design standards that can withstand climate change effects. “Relocate and avoid” strategies move existing facilities to areas less threatened by climate change. An “abandon and disinvest” approach involves discontinuing service on facilities when it is no longer financially feasible to continue investment in them given current or potential threats. “Promote redundancy” strategies are aimed at adding assets that could serve as backup facilities if primary facilities fail. A critical obstacle to creating adaptation strategies is the lack of adequate information on how and when the climate will change. Without this type of information, assessing risk and designing development strategies are difficult.
Transportation design, maintenance, and replacement will need to become more flexible to incorporate climate adaptation considerations. Adaptation activities are underway at both the Federal and State levels. The U.S. DOT is working to develop models to assess and identify climate change vulnerabilities and risks to critical transportation assets. Additional studies on regional impacts of climate change are also in process. At the State level, several States are developing climate change adaptation action plans that consider necessary adaptation and mitigation strategies.

Fostering livable communities—places where transportation, housing, and development have been coordinated to provide access to adequate, affordable, and environmentally sustainable transportation options—is a goal of the U.S. DOT. Transportation plays an important role in creating safer, healthier communities with the strong economies needed to support our families. A key component of livable communities is having transportation choices. A multimodal system that integrates walking, bicycling, transit, and automobile access provides residents with more choices of where to live, work, and play. Integrating land use planning with transportation improves livability by fostering a balance of mixed-use neighborhoods that recognizes the importance of proximity, layout, and design in keeping people close to home, work, services, and recreation. Communities across the United States have begun tracking the implementation process and accessibility outcomes of livability investments that expand transportation options. However, it is easier to articulate the benefits of livable communities than to quantify them; work is continuing to reach consensus on what data should be collected on a consistent basis nationwide to track progress in improving livability. Related U.S. DOT performance targets include:

- Increase the number of States with policies that improve transportation choices for walking and bicycling from 21 in 2010 to 23 in 2012.
- Increase access to convenient and affordable transportation choices, as reflected by an average increase in transit boardings per transit market of 2.0 percent per year from 2010 to 2012.
- Improve access to transportation for special needs populations, as reflected by increasing the percentage of bus fleets compliant with the Americans with Disabilities Act (ADA) from 97 percent in 2007 to 98 percent in 2012 and increasing the percentage of key rail stations that are ADA compliant from 93 to 95 percent between 2007 and 2012.
Previously, drug-based synchronization procedures were used to characterize the cell cycle dependent transcriptional program. However, these synchronization methods result in growth imbalance and alteration of the cell cycle machinery. DNA content-based fluorescence activated cell sorting (FACS) can separate the different cell cycle phases without perturbing the cell cycle. MiRNAs are key post-transcriptional regulators of the cell cycle; however, their expression dynamics during the cell cycle have not been explored. Following an optimized FACS protocol, a combination of high-throughput platforms (microarray, TaqMan Low Density Array, small RNA sequencing) was used to study the gene and miRNA expression profiles of cell cycle sorted human cells originating from different tissues. Validation of high-throughput data was performed using quantitative real-time PCR. Protein expression was detected by Western blot. Rigorous statistical and pathway analyses were also applied. Beyond confirming the previously described cell cycle transcriptional program, genes expressed in a cell cycle dependent manner showed higher expression independent of cell cycle phase and a lower amplitude of dynamic changes in cancer cells as compared to untransformed fibroblasts. In contrast to mRNA changes, miRNA expression was stable throughout the cell cycle. Cell cycle sorting is a synchronization-free method for the proper analysis of cell cycle dynamics. Altered dynamic expression of universal cell cycle genes in cancer cells reflects the transformed cell cycle machinery. Stable miRNA expression during cell cycle progression may suggest that dynamic miRNA-dependent regulation is of less importance in short-term regulation during the cell cycle. The fine-tuned mechanisms of the cell cycle have long been a focus of cancer research, resulting in a better understanding and optimization of the action of several chemotherapeutic agents [1–3]. Both posttranslational modifications of proteins (e.g.
protein-protein interactions, phosphorylation) and altered transcriptional activity of specific genes contribute to the tightly controlled regulation of the cell cycle. Analysis of mRNA transcripts expressed in a cell cycle dependent manner using high-throughput screening methods has identified numerous genes that can differentiate malignant tumors from benign lesions [4–6]. These correlations presumably reflect the accelerated cell cycle dynamics of malignant tumors, which result in a larger proportion of cells residing in S and G2 phases. Former approaches for detecting transcripts expressed in a cell cycle dependent manner used several synchronization techniques to arrest the cell cycle at a certain point. Among others, serum starvation, double thymidine block, and thymidine-nocodazole block halt the cell cycle of cultured cells at G0, early S, and M phases, respectively [4–6]. After removal of the synchronizing agent, time-course gene expression data followed by adequate bioinformatics analysis were used to identify cell cycle regulated transcripts [4–6]. However, several conflicting arguments have been raised concerning the use of these synchronization procedures. Statistical re-examination of a former study surprisingly revealed that randomized time-course gene expression data showed the same strong periodicity in expression patterns as the data obtained from the original synchronization experiment. Moreover, synchronization procedures in general, and DNA replication inhibitors such as thymidine in particular, perturb the cell cycle machinery, producing growth imbalance and unscheduled expression of cyclins [8, 9]. Additionally, cells may lose their synchronization relatively soon after release from the synchronizing agent [5, 10], and only a subset of cells reenter the cell cycle after arrest [5, 11].
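The randomization critique mentioned above can be illustrated with a small permutation check: score a time course for periodicity at the presumed cell cycle frequency, then ask how often a random reshuffling of the same measurements scores just as high. The sketch below is hypothetical (synthetic noise data, not the cited re-analysis); it simply shows the mechanics of such a permutation test.

```python
# Minimal permutation check for apparent periodicity in a time course
# (synthetic data; illustrates the idea, not the published re-analysis).
import math
import random

def periodicity_score(values, cycles):
    """Power of the Fourier component completing `cycles` periods over the series."""
    n = len(values)
    re = sum(v * math.cos(2 * math.pi * cycles * t / n) for t, v in enumerate(values))
    im = sum(v * math.sin(2 * math.pi * cycles * t / n) for t, v in enumerate(values))
    return (re * re + im * im) / n

random.seed(0)
# Hypothetical expression time course: pure noise, no true cell cycle signal.
series = [random.gauss(0.0, 1.0) for _ in range(24)]
observed = periodicity_score(series, cycles=2)

# Permutation null: shuffle the time points and re-score many times.
null = []
for _ in range(1000):
    shuffled = series[:]
    random.shuffle(shuffled)
    null.append(periodicity_score(shuffled, cycles=2))
p_value = sum(s >= observed for s in null) / len(null)
print(f"permutation p-value for periodicity: {p_value:.3f}")
```

A noise series can still carry substantial power at the tested frequency, which is why a periodicity score alone, without a permutation null, is weak evidence of cell cycle regulation.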
Therefore, a set of criteria has been introduced for the analysis of cell cycle dependent transcripts. Accordingly, the expression pattern of a certain gene can be regarded as cell cycle dependent if (i) no inhibition or starvation method was used for synchronization, (ii) results are reproducible over several experiments, (iii) results of methods other than microarrays (e.g. Northern blot, quantitative real-time polymerase chain reaction – qRT-PCR) support the findings, (iv) expression patterns are confirmed in non-synchronized experiments (e.g. cells separated by size or DNA content), and (v) statistically robust analysis supports the results. The regulation of the cell cycle in general, and the cell cycle dependent transcriptional program in particular, is the consequence of precise interactions between cyclin-cyclin dependent kinase complexes and an oscillating network of transcription factors [12, 13]. Additionally, epigenetic mechanisms such as microRNA-mediated regulation contribute to proper cell cycle control. MicroRNAs (miRNAs) are short, ~22 nt long noncoding RNA molecules that regulate gene expression at the post-transcriptional level by targeting the 3’ untranslated regions of mRNAs. Extensive complementarity results in mRNA degradation, whereas partial complementarity leads to silencing through translational repression. Among other physiological functions, the importance of miRNA-dependent gene regulation has been confirmed for several key members of the cell cycle machinery [16–18], contributing to miRNA-dependent cell cycle changes [17, 19]. Altered expression of cell cycle-controlling miRNAs has been reported in neoplasms of various tissues [18, 20, 21]. Additionally, dynamic miRNA expression changes have been observed during exit from the quiescent state upon serum re-addition to the culture medium of serum-starved cells [22, 23].
In particular, elevated expression of E2F1 and E2F3 in response to mitogenic stimuli has been shown to enhance the expression of their transcriptional targets: hsa-let-7 and hsa-miR-16 family members. Moreover, E2F1 has been shown to enhance hsa-miR-15 expression, which inhibits cyclin E, one of the key transcriptional targets of E2F1. Accordingly, it has been proposed that such feed-forward loops encompassing the E2F transcription factors, miRNAs, and cyclins contribute to the fine-tuning of cell cycle regulation. However, potential dynamic miRNA expression changes between the cell cycle phases of actively cycling cells, without any synchronization or serum shock procedures, have not been thoroughly investigated. Here, for the first time, we show that the gene expression signature obtained from unperturbed cells sorted by fluorescence activated cell sorting (FACS) based on their DNA content at different phases of the cell cycle correlates well with former gene expression studies using synchronization methods. Although cell cycle genes showed lower overall expression in primary, untransformed fibroblasts, their dynamic mRNA expression changes were of greater amplitude than those detected in cancer cell lines, reflecting the more precise cell cycle regulation in untransformed cells with lower proliferation rates. Using several high-throughput miRNA-screening methods, miRNA expression, unlike mRNA expression, was found to be quite stable throughout the cell cycle in various human cells. Our optimized cell cycle sorting was able to differentiate cells residing in the various cell cycle phases in all three cell types used (HDFa, NCI-H295R and HeLa cells) (Fig. 1, panel a-c).
The purity of cell cycle sorted populations varied between cell types and cell cycle phases (Additional file 1: Table S1), but based on FACS reanalysis, these sorted populations were still more homogeneous than cells obtained by synchronization procedures. G1 phase was sorted most efficiently, with more than 95 % purity in all cell types. In NCI-H295R and HeLa cells, S phase cells formed a more homogeneous population than cells in G2 phase. Optimization of the cell cycle sort was needed to achieve high sorting purity without damaging or perturbing physiological cell functions. In particular, determining the upper limit of sorting time, using a specialized sort medium, and immediately re-analyzing sorted cells contributed to our results. Tyr15 phosphorylation of the CDC-2 protein is a tightly controlled event in cell cycle progression, so the respective amounts of phospho (Tyr15)-CDC-2 provide a general hallmark of each phase. Western blot analysis performed on protein extracts of sorted NCI-H295R and HeLa cells showed the well-known phosphorylation patterns of CDC-2 (Fig. 1, panel d-e), confirming the purity of cell cycle sorting at the protein level as well. The quantity and quality of RNA isolated from sorted cells were sufficient to perform high-throughput gene expression screening (Additional file 2: Figure S1, Additional file 1: Table S1). Gene expression profiling followed by rigorous statistical analysis detected 55 mRNA transcripts in NCI-H295R cells (Fig. 2, Panel a, Additional file 1: Table S3, panel B) and 252 mRNA transcripts in HeLa cells (Fig. 2, Panel b, Additional file 1: Table S3, panel C) expressed in a cell cycle dependent manner. Note that the majority of detected gene expression changes share a common pattern: expression rises as the cell cycle proceeds. Additionally, clustering showed that the expression patterns of S and G2 phases are closer to each other than to that of G1 phase.
Statistical analysis of the HDFa microarray data failed to detect genes with significantly altered expression; however, functional bioinformatics analysis of gene expression changes greater than twofold (FC > 2, Fig. 2, Panel f, Additional file 1: Table S3, panel A) supports the concept that these changes strongly influence cell cycle progression. Moreover, the successful qRT-PCR validation of the chosen FC > 2 genes (Fig. 2, Panel c) and the significant correlation of gene expression changes in cell cycle sorted cells with former synchronization-based experiments in primary fibroblasts (Fig. 3, Panel b) further confirm the relevance of our approach. qRT-PCR experiments on six genes chosen from the microarray analysis confirmed the microarray results in all three cell types (Fig. 2, Panel c-e). Specifically, all six genes chosen for qPCR validation were present in the significant (NCI-H295R, HeLa) or FC > 2 (HDFa) lists. Moreover, ARHGAP11A, KIF14 and GTSE1 were previously found to be expressed in a cell cycle dependent fashion in primary fibroblasts and HeLa cells, while ASPM and SKA1 were found to be cell cycle regulated in primary fibroblasts. The successful validation of these well-known cell cycle genes in all three cell types analyzed here further confirms our cell cycle sorting method. Functional bioinformatics analysis was used to detect altered pathways based on our microarray results. As a further confirmation of our method, “Cell cycle,” “Cellular assembly and organization,” and “DNA replication, recombination and repair” were the molecular and cellular functions most affected by gene expression changes in all three cell types (Fig. 2, Panel f-h). Several conflicting arguments have been raised concerning the applicability of synchronization procedures for defining transcripts with cycling expression in unperturbed cells.
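One simple way to check whether a synchronization-free experiment reproduces synchronization-based results is to correlate per-gene fold changes between the two experiments. The sketch below uses made-up log2 fold-change values, not the study's data, and a hand-rolled Pearson coefficient for self-containment.

```python
# Hedged sketch: Pearson correlation of per-gene fold changes from two
# experiments (all numbers below are hypothetical, for illustration only).
import math

def pearson(x, y):
    """Sample Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical log2 fold changes (G1 -> S) for the same genes measured in a
# cell cycle sort experiment and in a synchronization experiment.
sort_fc  = [1.8, 0.4, -0.9, 2.1, 0.1, -1.4, 0.7]
synch_fc = [1.5, 0.6, -0.7, 1.9, -0.2, -1.1, 0.9]
r = pearson(sort_fc, synch_fc)
print(f"Pearson r = {r:.2f}")
```

A high positive r across many genes indicates that the two methods rank and scale the same expression changes similarly, which is the kind of agreement reported in the comparison that follows.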
Therefore, we aimed to compare expression changes between cell cycle phases detected by gene expression profiling in synchronization-based and cell cycle sort-based experiments. Because synchronization-based time-course gene expression data for an adrenocortical cell line have not been published, comparisons were made with primary fibroblasts and HeLa cells. Pearson’s method showed significant correlation between gene expression changes observed in synchronization-based and cell cycle sort-based experiments, confirming previous synchronization experiments with a synchronization-free method in unperturbed cells (Fig. 3, Panel a-c, Additional file 2: Figures S3 and S4, Additional file 1: Table S4). Additionally, Gene Ontology (GO) term analysis was performed on the HeLa cell cycle dependent transcriptional program to examine possible differences in the biological processes affected by cell cycle sort and synchronization procedures. As both cell cycle sort-based and synchronization-based results are available only for HeLa cells, we performed the analysis on three gene lists: genes unique to the HeLa cell cycle sort experiment (unique HeLa SORT), genes unique to the HeLa synchronization experiment (unique HeLa synchr), and the overlap between these two lists. All three lists were enriched in cell cycle-related processes; however, the overlap between the two experiments presented the most significant enrichment of cell cycle-associated biological processes, cross-validating important cell cycle genes detected by both the synchronization-based and cell cycle sort-based procedures. All the GO terms detected in the unique HeLa SORT list were also detected in the overlap list; interestingly, however, five out of eight GO terms detected in the unique HeLa synchr list were unique to that list, not being present in the analyses of the unique HeLa SORT or overlap gene lists (Table 1 and Additional file 1: Table S5). qRT-PCR validation of the microarray experiments (Fig.
2, Panel c-e) indicated that gene expression changes might be characterized by different amplitudes in primary vs. cancer cells. Therefore, we analyzed the expression profiles and cell cycle dynamics of genes displaying altered expression between cell cycle phases in both primary untransformed (HDFa) and transformed cancer (HeLa) cells (127 genes present in both the HDFa SORT and HeLa SORT gene lists, Fig. 3, Panel a). Significantly lower expression values were found in primary untransformed cells compared with cancer cells in G1, S, and G2 phases alike (Fig. 3, Panel d-e). To analyze mRNA dynamics during the cell cycle in untransformed and cancer cells, differences in the mean fold changes of expression of genes commonly altered in the HDFa and HeLa cell cycle sort experiments were calculated and evaluated (Fig. 3, Panel f-g). Among several significant alterations, a robust difference in the mean fold change of gene expression was observed at the G1/S transition between primary fibroblasts and cancer (NCI-H295R and HeLa) cells based on both microarray and qRT-PCR results. During the cell cycle, cycling genes had lower basal expression but demonstrated expression changes of significantly greater amplitude in primary non-transformed fibroblasts than in transformed cancer (NCI-H295R and HeLa) cells. Three high-throughput platforms (microarray, TaqMan Low Density Array and Illumina small RNA sequencing) were used to detect cell cycle dependent miRNA expression (Fig. 4, Panel a-d, Additional file 2: Figure S5). Among them, the microarray (Fig. 4, Panel a and c) displayed the lowest dynamic range and was unable to detect miRNAs with altered expression between cell cycle phases in HDFa and NCI-H295R cells. The TaqMan Low Density Array (Fig.
4, Panel b) performed on RNA isolated from sorted NCI-H295R cells detected 8 miRNAs with altered expression between cell cycle phases (among which only hsa-miR-10b, hsa-miR-128a and hsa-miR-890 had fold change values exceeding 2); however, qRT-PCR validation of selected miRNAs failed to confirm these results (Additional file 2: Figure S6). Among the three platforms used in our study, small RNA sequencing had the largest dynamic range for detecting miRNA expression alterations (Fig. 4, Panel d and Additional file 2: Figure S5). Still, statistical analysis detected only 11 miRNAs with altered expression in HeLa cells, of which only four (hsa-miR-146b, hsa-miR-577, hsa-miR-877 and hsa-miR-193b*) showed an FC > 2 expression change between cell cycle phases. qRT-PCR measurements, as in the TLDA validation attempt, failed to validate differential expression in NCI-H295R and HeLa cells (Additional file 2: Figures S5 and S6). For further validation, four other miRNAs showing stable expression across cell types and cell cycle phases in the high-throughput data were selected for qRT-PCR control analysis, which confirmed their stable expression pattern (Additional file 2: Figure S6). Among several cell cycle regulator miRNAs, members of the hsa-miR-16 family were found to display dynamic expression changes between the serum-starved G0 and actively proliferating states. Therefore, we analyzed expression changes of the hsa-miR-16 family members hsa-miR-16, hsa-miR-15a and hsa-miR-503 in our high-throughput data (Fig. 4, Panel a-d and Additional file 2: Figure S5, Panel a) and performed qRT-PCR analysis as well (Fig. 4, Panel e-g). In the case of hsa-miR-15a, small RNA sequencing detected a two-fold alteration in expression in both NCI-H295R and HeLa cells, and qRT-PCR analysis further confirmed small but significant expression changes in all three cell types.
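The FC > 2 screen used above amounts to keeping a transcript only when its maximum expression across the sorted phases exceeds its minimum by more than two-fold, i.e. |log2 fold change| > 1. A minimal sketch with hypothetical miRNA names and expression values (the study's actual pipeline additionally applied statistical testing):

```python
# Hedged sketch of a two-fold-change filter across cell cycle phases
# (miRNA names and expression values below are invented for illustration).
import math

def max_log2_fold_change(phase_values):
    """Largest pairwise log2 fold change across the cell cycle phases."""
    return math.log2(max(phase_values) / min(phase_values))

# Hypothetical normalized expression per phase (G1, S, G2).
mirnas = {
    "hsa-miR-A": (10.0, 11.0, 9.5),   # stable across the cycle
    "hsa-miR-B": (4.0, 9.0, 8.5),     # more than two-fold change between phases
}
passing = [name for name, vals in mirnas.items()
           if max_log2_fold_change(vals) > 1.0]
print(passing)   # only the miRNA with a >2-fold change survives the filter
```

The point the surrounding text makes is that, for miRNAs, very few candidates survived even this permissive filter, and those that did could not be confirmed by qRT-PCR.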
With the advent of high-throughput transcriptional profiling, precise analysis of the cell cycle transcriptional program became possible. Time-course gene expression data obtained after various synchronization procedures were used for gene expression profiling on microarray platforms [4–6], which demonstrated the cell cycle transcriptional program in various cell types including untransformed primary [5, 6] and transformed cancer cells. However, questions were raised about the applicability of synchronization procedures to the analysis of cell cycle dependent transcriptional profiles. In fact, growth imbalance and unscheduled expression of cyclins, key cell cycle factors, were demonstrated as consequences of synchronization procedures [8, 9]. Additionally, cells failed to retain their synchronization relatively soon after the synchronizing agent was removed from the growth medium [5, 10]. A rigorous set of criteria was introduced by Shedden and Cooper for the analysis of cell cycle dependent gene expression, and novel methods satisfying these criteria were therefore needed. Centrifugal elutriation and cell cycle sorting [26–28] are two synchronization-free methods satisfying the Shedden and Cooper criteria without perturbing the cell cycle machinery. However, to our knowledge no detailed study using these methods to determine cell cycle dependent gene expression in human cells has been published to date. In centrifugal elutriation, cells are separated by size, whereas in cell cycle sorting by FACS, the DNA content of each cell is used for separation [27, 28]. Both methods achieve efficient separation of cell cycle phases [12, 13, 27–29]. Although proper segregation of cell cycle phases by cell cycle sorting may be optimal in cells lacking aneuploidy, aneuploid cells can also be subjected to cell cycle sorting, provided that further confirmation of the sort (e.g. detection of the differential expression of key cell cycle regulators such as
Tyr-15 phosphorylation of CDC-2) supports the results. In our results, the dynamic changes of Tyr-15 phosphorylation of CDC-2 in HeLa cells showed as great variability as in NCI-H295R cells, confirming successful segregation of cell cycle phases in HeLa cells as well. Here, using the cell cycle sorting method, we report a comparative analysis of the cell cycle dependent transcriptional profile of human untransformed primary (fibroblast) and cancer (adrenocortical and cervical) cells. Functional bioinformatics analysis revealed cell cycle related molecular and cellular functions to be those most affected by these transcriptional alterations. As time-course gene expression data from previous synchronization-based studies of primary human fibroblasts and HeLa cells were available, we re-analyzed these data to compare synchronization-based and cell cycle sorting-based results. Given the significant correlation between our data and these earlier results, we conclude that data obtained from cell cycle sort experiments confirm earlier results demonstrating cell cycle dependent gene expression in human cells, while also satisfying the rigorous criteria described above. Moreover, GO term analysis was performed to assess biological processes related to the HeLa cell cycle dependent transcriptional program. The overlap between the HeLa cell cycle sort and HeLa synchronization experiments showed a robust enrichment of cell cycle-related GO terms, cross-validating the key players of the cell cycle dependent transcriptional program. Additionally, cell cycle-related biological processes were enriched in both the unique HeLa cell cycle SORT and the unique HeLa synchronization gene lists. Interestingly, however, the majority of GO terms detected in the unique HeLa synchronization gene list were absent from the overlap gene list, indicating some specific mechanisms related to the synchronization procedures.
The specific presence of “response to DNA damage stimulus” and “cellular response to stress” and the induction of “DNA repair” GO terms confirm replication stress as a consequence of synchronization procedures [8, 9, 30]. These analyses further confirm that our synchronization-free cell cycle sorting method characterizes the cell cycle of unperturbed cells more specifically than synchronization-based methods do. We also investigated possible differences in the cell cycle regulated transcriptional program between human untransformed and cancer cells. Whitfield et al. demonstrated that genes exhibiting cell cycle regulated expression were overexpressed in malignant tumors, reflecting the malignancy signature of neoplasms. This was explained by the fact that tumors contain more cycling cells. Cell cycle dynamics are altered disproportionately during malignant transformation: activation of the oncogenes HRAS, SRC, MYC, CCND1 and CCNE [32–34] and loss of tumor suppressor genes such as PTEN shorten the G1 phase, while loss of the key M phase regulators LZTS1 and LATS2 results in M phase shortening [31, 36, 37]. These alterations lead to a relatively larger portion of cells residing in S and G2 phases. Additionally, certain gene clusters were confirmed to exhibit cell cycle dependent expression in either primary untransformed or transformed cancer cells, distinguishing cells according to their malignant transformation status. Our results contribute to the notion of different transcriptional regulation in untransformed and cancer cells. Since we analyzed only three human cell types of different tissue origin, we cannot draw a definitive conclusion universal to the cell cycle effects of malignant transformation. However, based on our analysis we may hypothesize that genes displaying universal cell cycle dependent expression in untransformed and cancer cells show altered expression in each phase and dynamic changes of different amplitude (Fig. 5).
mRNA expression was found to be higher in cancer cells in G1, S, and G2 phases alike; therefore, in addition to an altered cell cycle distribution, basal, phase-independent up-regulation of these cell cycle genes may also contribute to the frequently observed higher expression in malignant cancers. Dynamic mRNA expression differences between the G1 and S phases were of greater amplitude in untransformed primary cells than in cells that had undergone malignant transformation. This may be explained by the longer and more tightly controlled G1 phase and G1/S transition observed in untransformed, primary cells, reflecting the more precisely regulated cell cycle machinery in untransformed cells. Moreover, MYC amplification stimulates E2F expression in cancer cells, facilitating the commitment to cell division [38, 39]. This facilitated regulation of the G1/S transition may also contribute to the smaller expression changes of the cell cycle dependent transcriptional program. MiRNAs have a well-established role in the regulation of the cell cycle. Oncogenic (onco-miR) and tumor-suppressor (TS-miR) miRNAs have been confirmed as modifiers of key cell cycle agents, accelerating or decelerating cell cycle progression [40, 41]. Long-term miRNA-mediated cell cycle changes contribute to malignant transformation in a variety of neoplasms [19, 42–46]. Additionally, the role of miRNA-mediated regulation has been confirmed in the transition from the quiescent to the actively proliferating state [19, 23]. In particular, mitogenic stimuli enhance cell cycle progression by stimulating key transcription factors of the E2F family, which in turn enhance members of the hsa-let-7 and hsa-miR-16 families [17, 22, 23]. These well-known TS-miRs target key cell cycle cyclins such as cyclin E, fine-tuning proper cell cycle progression.
However, the proposed cell cycle dependent miRNA expression pattern has not been thoroughly investigated: to our knowledge, only one synchronization study identifying some cell cycle regulated miRNAs has been published to date. Our study, aiming to detect miRNA expression changes between cell cycle phases, included the application of miRNA microarray, qPCR-based TLDA and Illumina small RNA sequencing. Microarrays are widely used for high-throughput miRNA profiling and produce results that can be validated by qPCR at a high rate. However, obtaining negative results prompted us to perform further analyses using qPCR-based TLDA and Illumina small RNA sequencing. The latter approach has a larger dynamic range of detection, allowing us to successfully detect smaller but significant alterations [49, 50]. qPCR-based TLDA results, however, can be most reliably validated by single-tube, individual miRNA-specific qPCR, as the primer sequences used in TLDA do not differ. Analysis of the three human cell types (two cancer cell lines and one primary cell) on three high-throughput miRNA expression platforms in our study revealed that the miRNA expression profile throughout the cell cycle phases was quite stable (Fig. 5). Surprisingly, our systematic study using multiple high-throughput platforms indicated the lack of validatable cell cycle dependent miRNA expression, and also showed that fold change differences are of small amplitude, especially in light of the robust and explicit changes observed in the mRNA expression of the very same cell stage samples. More than 50 % of miRNA genes are located in cancer-associated genomic regions or in fragile sites, being continuously downregulated or deleted in cancers [40, 51]. Therefore, the loss of TS-miRs and the activation of onco-miRs are specifically involved in long-term malignant transformation.
Moreover, with the loss of genetic regions containing miRNA genes, the possibility of their dynamic regulatory functions throughout the cell cycle is lost as well [40, 51]. Additionally, miRNA-dependent gene regulation was found to be a much slower process than previously thought, due to certain bottlenecks related to the complex biogenesis and maturation processes or to delays in the loading of miRNAs into Argonaute proteins. Accelerated turnover was proposed to be necessary for certain miRNAs to be possibly involved in dynamic cell cycle regulation. Such accelerated turnover has been confirmed in the case of the hsa-miR-16 family. This family has been identified as a cluster of TS-miRs, downregulated in various types of cancers [19, 42–44] and associated with the quiescent state. Our results indicated some minor miRNA expression changes, especially in the case of hsa-miR-15a; however, expression changes were not fully congruent across the three cell types studied, suggesting that the well-known, cell type-specific expression of miRNAs may contribute to this phenomenon. Finally, it is of utmost importance to address the limitations of our study. Firstly, having analyzed only three human cell types of different tissue origin, we cannot draw a general conclusion concerning differences in the expression dynamics of the universal cell cycle genes. Secondly, although different culture conditions may have some effect on the observed cell cycle differences, our method perturbs the cell cycle far less than previous serum shock-based or inhibitor-based synchronization procedures. In conclusion, the successful utilization of cell cycle sort as a novel method for the analysis of the cell cycle transcriptional program in our study confirmed the previously identified cell cycle transcriptional regulation.
Different phase-dependent and phase-independent mRNA expression dynamics of cell cycle genes in human untransformed and cancer cells were revealed, reflecting the altered cell cycle machinery in cancer cells at the transcriptional level. Perhaps more interestingly, the application of various high-throughput platforms (microarray, TLDA, Small RNA Sequencing) for miRNA profiling showed that miRNA expression dynamics are unaltered during the active cell cycle at the G1/S and S/G2 transitions. Human adrenocortical cancer cell line NCI-H295R and human cervical cancer cell line HeLa were obtained from the American Type Culture Collection (ATCC), while human dermal fibroblast (HDFa) cells were obtained from Gibco (Life Technologies). NCI-H295R cells were cultured in Dulbecco’s modified Eagle’s medium/Nutrient Mixture F-12 Ham (DMEM: F12) supplemented with 6.25 ng/ml insulin, 6.25 ng/ml transferrin, 6.25 ng/ml sodium selenite, 1.25 mg/ml bovine serum albumin, 5.35 ng/ml linoleic acid, 1 % HEPES, 1 % Penicillin-Streptomycin, 2.5 % L-glutamine (Sigma-Aldrich Chemical Co.) and 2.5 % Nu-Serum (BD Biosciences). HeLa cells were cultured in Dulbecco’s modified Eagle’s medium/Nutrient Mixture F-12 Ham (DMEM: F12, Sigma-Aldrich Chemical Co.) supplemented with 10 % fetal bovine serum (Gibco by Life Technologies) and 1 % antibiotic-antimycotic solution (Sigma-Aldrich Chemical Co.). HDFa cells were cultured in Medium 106 supplemented with low serum growth supplement (LSGS, Gibco by Life Technologies). All cells were cultured at 37 °C in a humidified 5 % CO2 atmosphere. HDFa, NCI-H295R and HeLa cells were cultured in 150 cm^2 cell culture flasks until 90 % confluency. Cells were trypsinized, washed, resuspended in complete medium and counted.
Vybrant DyeCycle Orange (Molecular Probes by Life Technologies) was used to stain genomic DNA stoichiometrically in living cells (approximate fluorescence excitation and emission maxima were 519 nm and 563 nm, respectively), and was added at a 1:500 dilution to a 1 × 10^6 cells/ml cell suspension. After incubation at 37 °C for 30 min, protected from light, cells were centrifuged at 1000 rpm for 10 min and were resuspended in the sort medium (Hank’s Balanced Salt Solution without Ca2+ and Mg2+, containing 2 % fetal calf serum). A FACSAria III cell sorter (Becton-Dickinson, Franklin Lakes, NJ, USA) was used for cell cycle analysis and sorting using a 488 nm argon laser. The fluorescence emission of Vybrant DyeCycle Orange was separated by a 556 nm longpass filter and detected through a 585/42 bandpass filter. At least 100,000 events were collected for analysis. Upon cell cycle analysis, cell populations representing G1, S and G2 phases were gated according to cellular DNA quantity. Sorting did not exceed 30 min and all sorted populations were validated by flow cytometry analysis. Data were analyzed by BD FACSDiva v6.1.3 software (BD Biosciences, San Jose, CA, USA). Thereafter, cells were centrifuged, washed with ice-cold PBS and resuspended in QIAzol lysis reagent (Qiagen) or Western blot lysis buffer for subsequent RNA or protein isolation, respectively. Until RNA isolation or Western blot, samples were stored at −80 °C. Optimizations of the protocol supplied by the manufacturer included the use of sort medium, concentration of the cell suspension before FACS analysis, an upper time limit for sorting, and immediate FACS reanalysis of every cell cycle-sorted population. Samples were thawed on ice, sonicated with ultrasound and incubated on ice for 30 min. Thereafter, samples were centrifuged at 13,000 rpm and 2 °C for 15 min. Protein concentration was determined by the Bradford method using a Varioskan Flash spectral scanning reader (Thermo Scientific).
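The DNA-content gating logic described above can be sketched in a few lines. This is an illustrative Python sketch only; the peak position, intensities and the 15 % tolerance are hypothetical values, not the instrument settings used in the study. The idea is that cells near the 2n (G1) fluorescence peak are called G1, cells near twice that intensity (4n) are called G2, and cells in between are called S phase.

```python
# Hypothetical DNA-content gating: thresholds and intensities are
# invented for illustration, not taken from the FACSDiva settings.

def gate_phase(intensity, g1_peak, tolerance=0.15):
    """Assign a cell cycle phase from stoichiometric DNA staining."""
    if abs(intensity - g1_peak) <= tolerance * g1_peak:
        return "G1"                    # ~2n DNA content
    if abs(intensity - 2 * g1_peak) <= tolerance * g1_peak:
        return "G2"                    # ~4n DNA content
    if g1_peak < intensity < 2 * g1_peak:
        return "S"                     # intermediate DNA content
    return "debris/aggregate"          # outside the singlet range

g1_peak = 100.0                        # hypothetical G1 peak channel
cells = [98.0, 150.0, 205.0, 52.0]     # hypothetical event intensities
print([gate_phase(c, g1_peak) for c in cells])
```

In practice the gates are drawn on the measured histogram rather than computed from fixed tolerances, but the classification principle is the same.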
Optical density was determined at 595 nm. Samples were mixed with β-mercaptoethanol containing Laemmli buffer and were incubated at 99 °C for 5 min. Thereafter, equal amounts of protein were loaded on a 10 % polyacrylamide gel and electrophoresis was conducted on Mini-PROTEAN electrophoresis equipment (Bio-Rad). Overnight blotting at 4 °C was performed to transfer proteins to a PVDF membrane (Millipore, Billerica, MA). Blotting efficiency was determined by Ponceau staining. Membranes were blocked with 5 % non-fat dry milk in TBS for 60 min at room temperature, and were incubated with primary phospho-CDC-2 (Tyr15) antibody (Cell Signaling Technology, cat. No.: 9111, dilution: 1:500) at 4 °C for 16 h. Thereafter, membranes were washed 5 times with 0.05 % Tween-20 containing TBS, and were incubated with secondary antibody (Cell Signaling Technology, cat. No.: 7074, dilution: 1:2000). All antibodies were diluted in 1 % non-fat dry milk containing TBS. After exposure to SuperSignal West Pico Chemiluminescent Substrate (Thermo Scientific), signals were visualized by a Kodak Image Station 4000MM Digital Imaging System. Thereafter, membranes were stripped with mild stripping buffer (0.2 M glycine, 0.1 % sodium dodecyl sulfate, 0.1 % Tween-20, pH = 2.2) by gentle agitation for 45 min at room temperature, and were blocked again for subsequent detection of the loading control β-actin (Cell Signaling Technology, cat. No.: 4967, dilution: 1:2000). Membrane blocking, antibody incubations and signal detection were carried out exactly as in the case of phospho-CDC-2 detection. Densitometry of the detected bands was performed on the Kodak Image Station. β-actin was used as loading control. Total RNA was isolated using the miRNeasy Mini Kit (Qiagen), according to the manufacturer’s instructions, and was eluted in 50 μL of nuclease-free water (Qiagen).
RNA concentration and integrity were determined by the Agilent Bioanalyzer 2100 system (Agilent Technologies, Additional file 2: Figure S1, Additional file 1: Table S1). Gene expression profiling was performed on 100 ng RNA isolated from sorted G1, S and G2 phases of HDFa, NCI-H295R and HeLa cells. In all, 24 samples (2 or 3 samples of each phase) were analyzed using Agilent whole human genome 4x44K microarray slides (Agilent Technologies) following the manufacturer’s protocol. miRNA expression profiling was performed on 100 ng RNA isolated from sorted G1, S and G2 phases of HDFa and NCI-H295R cells. In all, 16 samples (2 or 3 samples of each phase) were analyzed. The miRNA expression profiling using microarray followed the manufacturer’s protocol. Total RNA was labeled with Cy3 and amplified using the Low Input Quick Amp Labeling Kit according to the manufacturer’s instructions. After RNA purification, labeled RNA was hybridized to Agilent 8 × 15 K Human miRNA Microarray Release 12.0 slides (Agilent Technologies), according to the manufacturer’s instructions. After washing, array scanning and feature extraction were performed with an Agilent DNA Microarray Scanner and Feature Extraction Software 11.0.1. RNA isolated from two samples of G1 and three samples of S and G2 phase-sorted NCI-H295R cells was studied using TaqMan Low Density Array (TLDA) cards, according to the manufacturer’s instructions. The miRNA expression profiling using TLDA was performed as previously reported. 30 ng of total RNA was reverse transcribed and pre-amplified using Megaplex RT primer pools A and B and Megaplex PreAmp primers, respectively. Quantitative real-time PCR was carried out in TaqMan Human MicroRNA Arrays A and B on a 7900HT Real Time PCR System (Applied Biosystems by Life Technologies). Two samples of each cell cycle phase of HeLa cells (six samples) and one sample of each cell cycle phase of pooled sorted NCI-H295R cells (three samples) were analyzed.
Small RNA sequencing was performed at BGI using the Illumina Small RNA Sequencing Platform. For library preparation the TruSeq Small RNA library preparation kit (Illumina, San Diego, California) was used. Sequencing was performed with 50 bp single-end reads (SE50) on an Illumina HiSeq2000, and 10 Mb of clean reads were analyzed with routine algorithms (BGI Tech Solutions, Tai Po, Hong Kong). For the gene expression qRT-PCR experiments, 30 ng of total RNA was reverse transcribed using the SuperScript VILO cDNA synthesis kit according to the manufacturer’s instructions (Applied Biosystems by Life Technologies). Gene expression was quantified using predesigned TaqMan probes (Additional file 1: Table S2, Applied Biosystems by Life Technologies) on a 7500 Fast Real-time PCR system (Applied Biosystems by Life Technologies). Gene expression data were normalized to the relative expression of ACTB. For the miRNA expression qRT-PCR experiments, 5 ng of total RNA was reverse transcribed and quantified using the TaqMan microRNA reverse transcription kit (Applied Biosystems by Life Technologies) and predesigned TaqMan probes (Additional file 1: Table S2, Applied Biosystems by Life Technologies) on a 7500 Fast Real-time PCR system (Applied Biosystems by Life Technologies). miRNA expression data were normalized to the relative expression of RNU48. All measurements were performed in triplicate (three biological, two technical replicates). Expression levels were calculated by the ΔCt(S-phase) – ΔCt(G1-phase) and the ΔCt(G2-phase) – ΔCt(G1-phase) (ΔΔCt) methods. Ingenuity Pathway Analysis (IPA, Ingenuity Systems) was used to detect molecular and cellular functions altered between cell cycle phases. Δ(G2-G1) gene expression changes of significantly differentially expressed genes (NCI-H295R and HeLa) or genes with fold change > 2 (HDFa) were subjected to IPA core analysis.
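The ΔΔCt normalization described above can be illustrated with a short sketch. The Ct values below are invented for illustration, not data from the study: each target Ct is first normalized to the housekeeping transcript (ACTB for mRNA, RNU48 for miRNA) to give ΔCt, then phase-to-phase differences are taken relative to G1 and converted to fold changes via 2^-ΔΔCt.

```python
# Sketch of the ΔΔCt calculation; all Ct values are hypothetical.

def delta_ct(ct_target, ct_housekeeping):
    """ΔCt: target Ct normalized to the housekeeping gene."""
    return ct_target - ct_housekeeping

def fold_change(ddct):
    """Relative expression via the 2^-ΔΔCt formula."""
    return 2 ** (-ddct)

# Hypothetical Ct values for one gene in the three sorted phases
ct = {"G1": 24.0, "S": 22.5, "G2": 23.0}
ct_actb = {"G1": 16.0, "S": 16.1, "G2": 15.9}

dct = {p: delta_ct(ct[p], ct_actb[p]) for p in ct}
ddct_s_g1 = dct["S"] - dct["G1"]    # ΔCt(S-phase) - ΔCt(G1-phase)
ddct_g2_g1 = dct["G2"] - dct["G1"]  # ΔCt(G2-phase) - ΔCt(G1-phase)

print(round(fold_change(ddct_s_g1), 2))   # S vs G1 fold change
print(round(fold_change(ddct_g2_g1), 2))  # G2 vs G1 fold change
```

A lower Ct means more template, so a negative ΔΔCt translates into a fold change above 1.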
Two former microarray studies identifying cell cycle dependent expression of mRNA transcripts in human primary fibroblasts and HeLa cells using synchronization-based procedures were selected. Processed data from these experiments were downloaded from http://genome-www.stanford.edu/Human-CellCycle/HeLa/ and from the European Bioinformatics Institute Array Express database (http://www.ebi.ac.uk/arrayexpress/experiments/E-TABM-263/) and were re-analyzed [4, 5]. Based on published FACS analysis data, the time points with the highest levels of synchronous populations in each cell cycle phase were chosen to represent G1, S and G2 phases, respectively. Differences in gene expression between phases were calculated from the difference in normalized expression of a given gene between the time points representing each phase. In these comparisons only those gene expression alterations were used for which the cell cycle sort indicated cell cycle dependent gene expression changes (FC > 2 genes of HDFa and significant genes of the HeLa experiment). Gene Ontology (GO) term analysis was performed to detect biological processes enriched for genes of the HeLa cell cycle transcriptional program. The online functional annotation tool of DAVID Bioinformatics Resources version 6.7 (https://david.ncifcrf.gov/) with Gene Ontology for biological processes (category: GOTERM_BP_FAT) was used. The input gene lists for the analysis were the genes unique to the HeLa SORT experiment (HeLa SORT \ HeLa synchr), unique to the HeLa synchronization experiment (HeLa synchr \ HeLa SORT) and the overlap between these two lists (HeLa SORT ∩ HeLa synchr). Bonferroni-corrected p-values < 0.05 were considered statistically significant. Analysis of gene expression levels and cell cycle dynamics in different cell types was performed by investigating the changes of 127 genes found to be expressed in a cell cycle dependent manner in HDFa and HeLa cells.
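The three DAVID input gene lists described above (unique to the SORT experiment, unique to the synchronization experiment, and their overlap) are plain set operations. A small sketch, with placeholder gene symbols standing in for the actual lists:

```python
# Placeholder gene symbols; the real lists come from the two experiments.
sort_genes = {"CCNB1", "CCNE2", "CDK1", "E2F1", "PLK1"}      # HeLa SORT
synchr_genes = {"CCNB1", "CDK1", "PLK1", "GADD45A", "TP53"}  # HeLa synchr

unique_to_sort = sort_genes - synchr_genes    # HeLa SORT \ HeLa synchr
unique_to_synchr = synchr_genes - sort_genes  # HeLa synchr \ HeLa SORT
overlap = sort_genes & synchr_genes           # HeLa SORT ∩ HeLa synchr

print(sorted(unique_to_sort))    # ['CCNE2', 'E2F1']
print(sorted(unique_to_synchr))  # ['GADD45A', 'TP53']
print(sorted(overlap))           # ['CCNB1', 'CDK1', 'PLK1']
```

Each of the three resulting lists is then submitted to DAVID separately for GO term enrichment.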
After combined normalization of all cell cycle sort-based gene expression microarrays, normalized intensity values for each cell type in each cell cycle phase were compared. For the analysis of gene expression dynamics during cell cycle progression, the absolute values of fold changes between cell cycle phases were calculated and compared between HDFa, NCI-H295R and HeLa cell types. Results of qRT-PCR experiments in 10 (Additional file 1: Table S2) out of these 127 genes were also subjected to these analyses. ΔCt values normalized to ACTB expression and absolute values of fold changes in cell cycle phases were calculated and compared in all cell types. Statistical analysis of the microarray data was performed with GeneSpring 12.6 (Agilent Technologies) software. Total signal normalization at the 75th percentile of raw signal values and baseline transformation at the median of all samples, following Agilent’s recommendation, were performed. Differentially expressed genes between G1, S and G2 phases were detected by one-way ANOVA followed by Tukey’s Honestly Significant Difference post hoc test and Benjamini-Hochberg correction for multiple measurements. ΔCt levels of individually measured mRNA and miRNA transcripts obtained by qRT-PCR measurements and subsequent normalization to housekeeping transcripts (ACTB or RNU48) were subjected to Student’s two-sided independent-samples t-test. Differences were analyzed between G1-S, S-G2 and G1-G2 phases, respectively. Center values shown are the average of replicate experiments. For genes displaying cell cycle dependent expression revealed by cell cycle sort, Pearson’s correlation was used to calculate the correlation between expression changes detected by different (cell cycle sort and various synchronization) methods. Student’s two-sided paired-samples t-test was used to detect differences in normalized expression of genes expressed in a cell cycle dependent manner between various cell types.
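The one-way ANOVA used above (performed in GeneSpring, followed by Tukey's post hoc test and Benjamini-Hochberg correction) rests on the F statistic, the ratio of between-group to within-group variance. A minimal, stdlib-only sketch with toy ΔCt-like values, not the study's data or the GeneSpring implementation:

```python
# Hand-rolled one-way ANOVA F statistic (toy numbers for illustration).

def one_way_anova_f(groups):
    """Return the F statistic for a list of sample groups."""
    all_values = [x for g in groups for x in g]
    n_total = len(all_values)
    grand_mean = sum(all_values) / n_total
    k = len(groups)
    # Between-group sum of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    ms_between = ss_between / (k - 1)       # k - 1 degrees of freedom
    ms_within = ss_within / (n_total - k)   # n - k degrees of freedom
    return ms_between / ms_within

# Hypothetical triplicate values for one gene in the three phases
g1 = [8.0, 8.2, 7.9]
s = [6.3, 6.5, 6.4]
g2 = [7.0, 7.2, 7.1]
f_stat = one_way_anova_f([g1, s, g2])
print(round(f_stat, 1))
```

A large F (here, well above the critical value for 2 and 6 degrees of freedom) indicates phase-dependent expression; the p-value is then read from the F distribution, which is omitted here for brevity.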
Student’s two-sided independent-samples t-test was used to detect differences in the absolute values of fold change of cell cycle dependently expressed genes of various cell types. In all comparisons a p-value < 0.05 was considered statistically significant. Statistical analysis for miRNA expression analysis of TLDA cards was performed using Real-Time StatMiner™ software (Integromics, Granada, Spain). Expression levels were calculated by the ΔΔCt method, and fold changes were obtained using the formula 2^-ΔΔCt. Following quality control, expression levels were normalized to the geometric mean of all expressed miRNAs. One-way ANOVA was used to detect significantly altered expression. In all comparisons a p-value < 0.05 was considered statistically significant. For the identification of differentially expressed miRNAs in the Small RNA Sequencing experiments, the edgeR package version 3.8.6 in R was used. Alignment to the miRBase version 21.0 mature miRNA database was performed on reads longer than 18 nucleotides with a maximum of 1 mismatch. The input data for the edgeR package were the pairs of phases (G1-S, S-G2, G1-G2) with two samples for each phase. The classic exact test and TMM normalization were applied. Benjamini and Hochberg’s algorithm was used to control the false discovery rate (FDR). A difference was considered statistically significant when both the p-value and the FDR were < 0.05. This work was supported by the Hungarian Academy of Sciences “Lendület” grant awarded to Attila Patocs (Lendület 2013), by the Hungarian Scientific Research Fund (OTKA, PD100648 (AP)) and by the Technology Innovation Fund, National Developmental Agency (KTIA-AIK-2012-12-1-0010). VKG participated in the design and laboratory work (FACS, expression profiling and validation, Western blot), performed data analysis and interpretation and drafted the manuscript. EAT carried out FACS analysis and helped to draft the manuscript. KB performed Western blot analysis and helped to draft the manuscript.
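The Benjamini and Hochberg FDR control mentioned above can be sketched generically. This is an illustrative stdlib-only implementation with made-up p-values, not the edgeR internals:

```python
# Benjamini-Hochberg adjusted p-values (q-values); p-values are invented.

def benjamini_hochberg(pvalues):
    """Return BH-adjusted p-values in the input order."""
    m = len(pvalues)
    # Sort p-values, keeping track of original positions
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        q = min(prev, pvalues[i] * m / rank)
        adjusted[i] = q
        prev = q
    return adjusted

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
qvals = benjamini_hochberg(pvals)
significant = [p for p, q in zip(pvals, qvals) if q < 0.05]
print(significant)  # [0.001, 0.008]
```

Note how tests that pass the raw p < 0.05 threshold (0.039, 0.041, 0.042) fail the FDR-controlled threshold, which is exactly the multiple-testing protection the study relies on.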
IL performed TLDA profiling and statistical analysis. OD performed statistical analysis on small RNA Seq data and helped to draft the manuscript. IK participated in the design and in Western blot analysis. JM participated in the design and FACS analysis and helped to draft the manuscript. KR participated in the design and helped to draft the manuscript. AP conceived of the study, participated in its design and coordination and helped to draft the manuscript. All authors read and approved the final manuscript.
https://bmcgenomics.biomedcentral.com/articles/10.1186/s12864-016-2747-6
A revised version of the draft Development Consent Order (Version 4) has been published. This was accepted at the discretion of the Examining Authority as an additional submission on 17 April 2019. We have missed the official deadline (yesterday!) to register as an interested party for the Norfolk Vanguard project. Equinor is the operator of the Dudgeon windfarm and as such we have a significant interest in the project: in land owned in Necton, and in the Dudgeon cable that will be crossed by the Vanguard project. Many thanks and I look forward to hearing back from you. I confirm receipt of your submission. As you are aware, the opportunity to register and submit a relevant representation has passed. Your current status in the examination, as a party with land affected by the development, is an ‘affected person’. As such, you will automatically become registered as an Interested Party. Details of the land affected by the proposed development can be found in the applicant’s Book of Reference document. As this submission was received after the close of the relevant representation period (16 September 2018), it will be presented to the Examining Authority, who has discretion to accept the information into the examination as an additional submission. Details of the examination process can be found in our advice notes. You may find Advice Note 8.4: The Examination, to be useful as an overview of the examination process. I spoke to a Planning Inspectorate advisor this morning to apologise for Great Yarmouth BC missing last night’s deadline for submitting a registration of interest in the examination (due to an internal misunderstanding). She indicated that it would likely be acceptable for the Council to send its submission this morning, which I am extremely grateful for, and it is attached.
Your current status in the examination is a ‘statutory party’ (a Local Authority bordering the Local Authorities in which the development is proposed, aka an ‘a’ authority); as such, you will not automatically become an Interested Party. However, as a statutory party you will receive an invitation to the Preliminary Meeting (PM) and a copy of the examination timetable. Following the PM, you will have a further opportunity to notify the ExA that you wish to be treated as an Interested Party. You will then receive all correspondence during the examination. Necton Parish Council queries the adequacy of the applicant’s pre-application community consultation. The enquirer submitted representations regarding the merits of the proposed development to the Planning Inspectorate. The application was submitted to the Planning Inspectorate on 26 June 2018. Beginning on the day after it was submitted, the Planning Inspectorate (on behalf of the Secretary of State) has 28 calendar days to decide whether the application can be accepted for examination. • Any Adequacy of Consultation Representations submitted by relevant local authorities. • The Planning Inspectorate’s Acceptance Checklist. a) The Consultation Report received with the application. b) Any Adequacy of Consultation Representations received by the Planning Inspectorate from a local authority consultee. c) The extent to which the Applicant has had regard to government guidance. Once registered as an Interested Party, you will be invited to: attend the Preliminary Meeting, attend any scheduled hearings, attend the Accompanied Site Inspection(s) and submit written representations. In addition to this, you can request that the Examining Authority holds an Open Floor Hearing where you can raise concerns about the project, which can include those set out in your email. I’ve just been on your web-site and cannot find any supporting documentation to the submitted DCO (e.g. the EIA etc.).
When do you expect to have all these documents uploaded onto the web-site? The suite of application documents is currently in the process of being prepared ahead of publication as soon as practically possible. We anticipate that the full suite will be available to view on the project page early next week. However, if it’s published sooner, I will let you know. Rather than keep trying via the phone, here are the points that I wanted to discuss with you. I have converted them into questions below for ease. My questions do not seem to be answerable from advice notes 8.3 or the FAQ on your website. Firstly, as you may be aware, I am contacting you on behalf of the N2RS Committee. My job is to deal with processes, timescales and procedures. With a membership of around 1000, N2RS is continuing to track the progress of the Vanguard Project. Our supporters cover mainly North Norfolk and include others from further afield. We are working co-operatively with local authority elected Members, our local MP Norman Lamb and landowner representatives. It is our intention to continue a close watching brief on the progress of the project and to campaign where necessary. We have several media platforms running (N2RS website, Facebook and Twitter). Q Roughly when will the time-limited period to register online as an Interested Party fall, given the proximity to the summer holiday season? Q If N2RS registers as a single organisation, how many core committee members will be allowed to attend the preliminary/subsequent meetings? Q Are there any objections to multiple applications from N2RS should that be necessary in order to have attendance at meetings? Q Roughly when can we expect the preliminary meeting to be scheduled? (We are asking because the written advice suggests that only about three weeks’ notice is given.) Q Ditto the subsequent meeting(s). Thank you for your email; apologies that we kept missing each other on the telephone. 1.
Should the application be accepted for Examination, the Applicant has a duty to advertise the Relevant Representations period and provide details about how to register to become an Interested Party. The registration period must be at least 28 days and the publicity notice will clearly tell you when the deadline is. The easiest way to become an Interested Party is to complete the form online via the project’s page of the National Infrastructure Planning website. The timing depends on the date on which the application is accepted (the Secretary of State has a deadline of 28 days from receipt of the application to make this decision, so if the application is received at the end of June, the acceptance decision wouldn’t be made until the end of July). If accepted, the Applicant is then required to certify compliance and give notice of the acceptance, which is when they will provide the period for registering relevant representations. The Applicant may consider the impact of the holiday period and provide a longer period for registration, but please note there is no requirement for them to do so. Some people will automatically become an Interested Party because they are landowners or have an interest in land affected by the application. This information will be provided in a Book of Reference supplied by the Applicant. They will be notified throughout the application process, but are also welcome to register via a Relevant Representation form so that their views can be made available to the Examining Authority at an early stage. 2. There is no limit on the number of N2RS core committee members that can attend the Preliminary Meeting or any subsequent hearings. However, in order to accommodate all Interested Parties and any members of the public wishing to attend the events, we would appreciate it if you could inform us well in advance of those people who wish to attend. Details of this will be requested from anyone who has registered at the time.
The agendas for all events will be published around seven days beforehand, which will allow N2RS members to decide if attending a hearing will be beneficial to them. 3. There is no requirement to provide multiple representations from N2RS in order to attend the hearings: if N2RS registers as an Interested Party, it will be able to fully participate in the examination process, i.e. submit written representations, provide comments on all application documents and other documents submitted by other parties, and respond to the Examining Authority’s written questions on behalf of N2RS. If members of N2RS wish to be individually recognised as Interested Parties in their own capacity, however, they will need to register a relevant representation separately, which must include a summary of the points they agree and/or disagree with about the application, highlighting what they consider to be the main issues and impacts. 4. The formal notification of the Preliminary Meeting will be issued at least 21 days before its date, in a letter called the Rule 6 letter. The Preliminary Meeting marks the end of the formal pre-examination stage. However, that stage lasts around three months to allow the Applicant and the Inspectorate to fulfil their statutory duties. 5. The draft Examination timetable will also be issued in the Rule 6 letter and this will set out the dates for the ExA’s procedural decisions, deadlines for written submissions etc., along with the dates of any hearings and site inspections scheduled to take place during the six-month examination. The finalised Examination timetable will then be issued around seven days after the close of the Preliminary Meeting. The agendas for individual hearings will be published nearer the time, usually around seven days before the hearings. If you have not already done so, you might find it helpful to view the online videos we have available explaining the DCO application process.
Please note, in accordance with Section 51 of the PA2008, a copy of your email and our advice will be published on the Norfolk Vanguard project’s webpage of the National Infrastructure Website. The Applicant is required to produce a consultation report detailing how they have complied with the consultation requirements of sections 42, 47 and 48 of the Planning Act 2008. This should be submitted with the application and will be checked, along with the other application documents, as part of the Acceptance process. This is a bit of a lateral question in a way, but something that puzzles us greatly. Can you tell us why, in a lot of the Vattenfall documents (the SOCC and PEIR for instance), the Secretary of State refers to the life of the projects as being 50 years? Vattenfall say 25 years. I hope you can help us understand. Thank you for your email. The SoCC and PEIR are the Applicant’s documents and therefore the Secretary of State would not have stated anything within these. I note from the Applicant’s PEIR that they state the design life of the project is 25 years. We have also published a Frequently Asked Questions document regarding Pre-application consultation and this can be viewed on our website here: [attachment 2]. Please find attached the report we commissioned from BLB Utilities on the alternative site we suggested to Vattenfall. This was paid for with donations from the residents of Necton. It has been sent directly to Vattenfall, and will of course also go to other relevant bodies. Thank you for your email. As the project has not yet been submitted to the Planning Inspectorate (the Inspectorate), we have no formal powers to intervene on consultees’ behalf. I would therefore encourage you to contact the developer directly to make your concerns heard, as the Applicant has a statutory duty to take your views into account.
However, if you feel your comments are not being taken into account, I would advise you to write to your local authority and set out why you think the Applicant is failing to conduct its consultation properly. Your comments should be taken into account when the local authority sends the Inspectorate its comments on whether the Applicant has fulfilled its consultation duties. The local authority’s comments on the Applicant’s consultation will be taken into account when the Acceptance Inspector makes their decision whether to accept the application for Examination. Necton Substation Action Group - anon. I would like to add my name to what I believe is a growing list of people concerned with the potential adverse effects on the environment of Norfolk of the increasing amount of onshore infrastructure inflicted on the County for offshore windfarms. My personal concern is with Vattenfall’s Norfolk Vanguard and Norfolk Boreas offshore windfarms and the location of the onshore substation at Necton. I believe that Vattenfall et al. are riding roughshod over the Norfolk populace by hiding behind an outdated mandate used by the National Grid. This is resulting in, among other things, the ludicrous cable routing and crossing issues ensuing between Vattenfall and Dong near Salle, plus the building of huge onshore substations in totally inappropriate locations. In addition, I am very concerned that companies such as Vattenfall are being economical with the truth when explaining/indoctrinating local people on their consultations. Despite what is said, in usually patronising terms, local people have no real say in determining the final outcome of such consultations. Companies such as Vattenfall do them purely as a paper exercise because they are required to consult, without actually paying much heed to local opinion. To them we are a nuisance to be brushed aside. They are not doing these projects to be altruistic and save the world. They do them to make money.
And by the way, Vattenfall, which is owned by the Swedish Government, also builds coal-fired power stations. Are they being selective in which parts of the world they want to save? Or is it all down to money? Thank you for your email. As the projects have not yet been submitted to the Planning Inspectorate (the Inspectorate), we have no formal powers to intervene on consultees’ behalf. I would therefore encourage you to contact the developer directly to make your concerns heard as the Applicant has a statutory duty to take your views into account. However, if you feel your comments are not being taken into account, I would advise you to write to your local authority and set out why you think the Applicant is failing to conduct its consultation properly. Your comments should be taken into account when the local authority sends the Inspectorate its comments on whether the Applicant has fulfilled its consultation duties. The local authority’s comments on the Applicant’s consultation will be taken into account when the Acceptance Inspector makes their decision whether to accept the applications for Examination. Below is a copy of my email to Vattenfall. Astonishingly, Necton and Ivy Todd residents haven't been given any graphical projection as to what we are really talking about in terms of mass and position. I would hope that we can look forward to this information. I am writing to ask if you would please supply us with aerial views, accurately plotting the positions of the 4 proposed sitings, to scale. So far, there haven't been any simulations to show the mass, the perspective from the villages, in order to gain an insight as to what is being proposed. We need graphics to be able to understand the actual impact the substations will have. Your photomontages show small distant views which do not serve the purpose of letting residents clearly see the true impact of the proposals. 
We need the views to represent Necton and Ivy Todd, with the inclusion of the properties which will be worst affected, and showing the substations in relation to the properties. I was astounded at the July presentation at Swaffham that your photomontages were so inadequate and were, I felt, less than truly representative and transparent. Thank you for the copy of your email to Vattenfall. As the project has not yet been submitted to the Planning Inspectorate (the Inspectorate), we have no formal powers to intervene on consultees’ behalf. You are doing what we would encourage you to do, which is to contact the developer directly to make your concerns heard as the Applicant has a statutory duty to take your views into account. However, if you feel your comments are not being taken into account, I would advise you to write to your local authority and set out why you think the Applicant is failing to conduct its consultation properly. Your comments should be taken into account when the local authority sends the Inspectorate its comments on whether the Applicant has fulfilled its consultation duties. The local authority’s comments on the Applicant’s consultation will be taken into account when the Acceptance Inspector makes their decision whether to accept the application for Examination. After the decision has been made regarding whether to accept the application for Examination all documents used to inform the decision will be published on our website. If the application for development consent is formally accepted you will be able to submit your views in relation to the project which will be considered by the Examining Authority during the Examination. The Inspectorate has published a series of advice notes which explain the Examination process, including information on how to get involved; of particular interest are advice notes 8.1 to 8.5. These are available at: [attachment 1]. 
I am contacting you on behalf of Holme Hale Parish Council in Norfolk who have an interest in the proposals coming forward from Vattenfall concerning the development of the Norfolk Vanguard Offshore windfarm. In particular, councillors are seeking information as to whether the parish council would be regarded as a statutory consultee for this project (as prescribed in Schedule 1 of the Infrastructure Planning Regulations 2009) with whom the developer has a duty to consult, as prescribed under s.42 of the Planning Act 2008? This development is a large scale project and the proposals are likely to have a significant impact upon the landowners and parishioners in the local communities like Holme Hale. As such, councillors in Holme Hale would wish to be included in the statutory consultation phase/process relating to the application from Norfolk Vanguard. Any help you can give in relation to this matter would be most appreciated. Thank you for your email. We can confirm that, on the information provided to us by Vattenfall when they submitted their Scoping Report to the Planning Inspectorate, Holme Hale Parish Council are a statutory consultee as prescribed in Schedule 1 of the Infrastructure Planning (Applications: Prescribed Forms and Procedure) Regulations 2009. You should have received a letter from us, dated 5 October 2016, advising that the Secretary of State had identified Holme Hale Parish Council as a consultation body for the scoping opinion and inviting comments on the information to be provided in the Environmental Statement. However, regardless of the information above, anyone with an interest in the project, whether they are a statutory consultee or not, is encouraged to respond to the consultation. Could you please clarify for us the position of our Parish Council in respect to the above project? 
You already kindly said this in a previous email: “Parish Councils are one of the bodies we would expect the developer to engage with at the pre-application stage.” (Their pre-application stage started 7th October and will run to 11th December, according to Norfolk County Council.) However, despite this our Parish Council are still saying “when the planning application is presented, this Council will be in a position to deliberate and make decisions”, a quote from their minutes which seems to indicate that they still believe that they are not allowed to make any comments on suitability or otherwise of sitings until after the planning application is put in, by which time of course it will be too late. Could you be very kind and clarify things for us from your point of view? We understand from the Applicant that they have engaged fully with the local parish councils. Statutory consultation for the Norfolk Vanguard project will open on 7 November and run until 11 December 2017 and all members of the community, including parish councils, are encouraged to participate and respond to the consultation material provided by the Applicant. We also recently published a Frequently Asked Questions document regarding Pre-application consultation and this can be viewed on our website here: [attachment 2]. I came to **** in 1956 with my husband, the farm has been my life, providing work place and home, where we raised 3 children. My husband passed away in 2010 and I now live here with my son and we rely on farming our 80 acres for income, which is becoming more difficult to achieve due to changing markets etc. I am very upset that, without our agreement, this massive substation can violate the quality of my life by ruining the beautiful location of our home. We are looking into farm diversification ideas too, which rely on the peaceful, rural nature of where we live. Our house was flooded in 1982, as the adjacent stream could not contain excessive storm water. 
I am very concerned about increased flood risk: will there be more run-off water from the substation than our stream can cope with? I am retired and enjoy the peace and tranquillity of my life here, and strongly object to Vattenfall building giant substations so close by. I have lived in Necton for over twenty years now and have always enjoyed village life and loved living with open views and rolling countryside. Vattenfall who also now want to invade our village of course say they are very aware of environmental issues, respect the wishes of residents, etc. are away from villages. We are not prepared to sit down and be bullied by these people who put profit before people and the environment. for industrial purposes - GREAT! Has anyone got the message yet - WE DON'T WANT ANY MORE SUBSTATIONS - WE'VE DONE OUR BIT. As the project has not yet been submitted to the Planning Inspectorate (the Inspectorate), we have no formal powers to intervene on consultees’ behalf. I note you have contacted the developer directly and we would encourage you to do this to make your concerns heard as the Applicant has a statutory duty to take your views into account. However, if you feel your comments are not being taken into account, I would advise you to write to your local authority and set out why you think the Applicant is failing to conduct its consultation properly. Your comments should be taken into account when the local authority sends the Inspectorate its comments on whether the Applicant has fulfilled its consultation duties. The local authority’s comments on the Applicant’s consultation will be taken into account when the Acceptance Inspector makes their decision whether to accept the application for Examination. After the decision has been made regarding whether to accept the application for Examination all documents used to inform the decision will be published on our website. 
If the application for development consent is formally accepted you will be able to submit your views in relation to the project which will be considered by the Examining Authority during the Examination. The Inspectorate has published a series of advice notes which explain the Examination process, including information on how to get involved; of particular interest are advice notes 8.1 to 8.5. These are available here: [attachment 1]. We are writing to all concerned regarding the proposed location for Vattenfall's Substation which they hope to site close to Necton Village. We are deeply concerned that Vattenfall are still hell bent on siting this monstrosity so close to a rural village. It is a quiet village consisting of a large percentage of retired and elderly residents who are distressed that their way of life will be ruined by siting this substation close to residential properties. There is also a primary school which will also be affected - who knows what impact this will have on young lives. We feel our complaint is justified especially as two other sites have been put forward for consideration that would have little or no impact on people's lives and way of living. One site - 185 acres at Top Farm - is available for purchase by the owner, who contacted Vattenfall. Joe Hill had thought they were definitely buying it, but had then heard no more until VF announced their preferred site on the land that is NOT for sale on Necton Farm (close to Ivy Todd). Necton Farm is not for sale and would therefore have to be subject to a compulsory purchase. This seems madness! Top Farm would be an appropriate site because it has no flooding issues, and it is closer to the pylon than Necton Farm would be. It also has natural landscaping and topography. The structures would not be seen from Necton, Holme Hale, West End or Ivy Todd. Other alternative sites would be near Scarning as proposed by Tony Smedley. 
One site would be close to the cable corridor at a crossing point on the A47. The other site being beyond this one down Watery Lane where there is land on either side of the road. This total lot being 165 acres which is for sale on the open market. Both sites at Scarning are sparsely populated and would have little or no impact on people's lives. So why does Vattenfall refuse to consider these sites? Why also is Necton Parish Council only able to speak against the proposals once the planning applications have gone in. By that time it is too late for them to make any objections as the chosen site would by then be impossible to change. This to us seems extremely unfair and biased. This means that Vattenfall have no consideration for the lives of the people who live in Necton and the impact on them. Why are our human rights not being considered??? We have sent various emails to Vattenfall in the past but have never had any reply from them. Only an automatic reply saying they will get back to us. It shows their contempt for the people of Necton!! As the project has not yet been submitted to the Planning Inspectorate (the Inspectorate), we have no formal powers to intervene on consultees’ behalf. I note you have contacted the developer directly and we would encourage you to do this to make your concerns heard as the Applicant has a statutory duty to take your views, as well as any Parish Council’s views, into account. However, if you feel your comments are not being taken into account, I would advise you to write to your local authority and set out why you think the Applicant is failing to conduct its consultation properly. Your comments should be taken into account when the local authority sends the Inspectorate its comments on whether the Applicant has fulfilled its consultation duties. 
The local authority’s comments on the Applicant’s consultation will be taken into account when the Acceptance Inspector makes their decision whether to accept the application for Examination. This is a general question on Projects of National Importance and the Statutory Consultations that I hope you can help with, as to us people ‘on the ground’ these parts of the process are most puzzling and, a lot of the time, most infuriating. Can you tell us why the very people who should be available to represent the residents at this time, and those with the most knowledge of the village in general, and its needs, are effectively gagged by the consultation process? I’m talking about our Parish Councils, who are, apparently, only able to speak for or against a proposal once siting has been decided and planning permission has been applied for. It seems totally bizarre to us that our PC are not allowed to give opinions, good or bad, on the various proposals being considered for planning applications. Because of course by the time planning applications go in, the site has been refined and chosen and would be, at that stage, almost impossible to change. Surely the PC, as our representative body, should be consulted, and allowed to represent Necton, before the final site is chosen, i.e. before planning is applied for, as afterwards it is far too late for them to have any useful input. Please can you help us with this, as it has caused terrible dissent and fracturing of the society of our village because so many people think that the PC is not responding by choice, and don’t understand that they are effectively gagged by the terms of the consultation process? Parish Councils are one of the bodies we would expect the developer to engage with at the pre-application stage. 
As advised previously, as the projects have not yet been submitted to the Planning Inspectorate we have no formal powers to intervene on consultees’ behalf; therefore, if the Parish Council has concerns about the consultation process then they should contact the developer directly to make their concerns heard as the Applicant has a statutory duty to take their views into account. If the Parish Council feels their views are not being taken into account I would advise them to contact the local authority. When does the inspector's work start officially? Is there a public announcement and can members of the public attend? Once an application has been submitted to the Planning Inspectorate we have 28 days to decide whether it is of a satisfactory standard to be examined. An Acceptance Inspector, along with the Case Team, will check the application against the statutory Acceptance tests under section 55 of the Planning Act 2008, on behalf of the Secretary of State. This process is not open to the public, however we will invite the host and neighbouring local authorities to submit a representation on the adequacy of consultation. We will also pass the Acceptance Inspector all correspondence that we have received in the pre-application period. Our website will be updated to advise the public that an application has been received and, if the applicant agrees, the application documents will be published on our website. I know that residents are not normally involved in the consultation process until the decisions have been made, and with my neighbours, I have been collating the regular lack of publicity and invitations to information sessions. I live a mere 200m from one of the proposed sites and, with my neighbours, was not included in invitations to an information evening about the proposed “Cable Relay Station” which will have a huge impact and blight on our lives and businesses. 
A little while ago Vattenfall changed the dates of publication of the PEIA and thereby appear to have engineered a situation where the report is too late for discussion in Council meetings in November and too early for the meeting in December. In County Council speak, the result of this is that the consultation period (27/10 - 4/12) does not fit into any of the County Council’s scheduled Environmental Development and Transport Committee dates and will therefore be taken as an “urgent Decision”, i.e. a decision to be taken by the Chief Officer (Executive Director of Community and Environmental Services in consultation with the EDT Committee Chair and Vice Chair). This means Norfolk County Council is going to use delegated powers to process the statutory response to the inspectorate. As I see it, this tactic avoids public scrutiny via Cabinet. At the Cabinet meeting at NNDC last week where the Dong application was discussed many councillors only became aware of issues because well informed members of the public were able to speak. As mentioned above, I am really concerned that the publicity of the Vattenfall proposal has been limited and many people are just not yet aware of the huge scope of this project, or the choices available. One of my neighbours visited the building associated with the Sheringham Shoal offshore generation and described it as more akin to a nuclear power station and the constant hum being very intrusive. So, not only is Norfolk facing two/three huge infrastructure projects (with probably more to follow), but the dates of one of them are resulting in an absence of the normal public scrutiny! I can’t believe that Norfolk residents are being hoodwinked in such a callous manner. Do you have any powers to ensure that there is full, open and proper public consultation on such an important issue? As the project has not yet been submitted to the Planning Inspectorate (the Inspectorate), we have no formal powers to intervene on consultees’ behalf. 
I would therefore encourage you to contact the developer directly to make your concerns heard as the Applicant has a statutory duty to take your views into account. However, if you feel your comments are not being taken into account, I would advise you to write to your local authority and set out why you think the Applicant is failing to conduct its consultation properly. The Norfolk Vanguard draft Statement of Community Consultation is currently with Norfolk County Council for consultation. This document sets out how Vattenfall will conduct their statutory consultation and prior to finalisation of their draft it was sent to Norfolk County Council for their feedback. Therefore they have an opportunity to comment on the proposed dates of public consultation. The statutory consultation period will not be the only opportunity for Norfolk County Council to comment on the proposals. Once the applications are submitted to the Inspectorate the Council will be invited to comment on whether the Applicant has fulfilled its consultation duties. The local authority’s comments on the Applicant’s consultation will be taken into account when the Acceptance Inspector makes their decision whether to accept the application for Examination. Local authorities have a very important role in the 2008 Act process. If the Norfolk Vanguard project is accepted for Examination, the Examining Authority will invite Norfolk County Council to submit a Local Impact Report (LIR), which can give details of the likely impact of the proposed development on the authority’s area. In coming to a decision on whether or not to grant consent for the project, the Secretary of State must have regard to any LIRs that are submitted by the deadline. Norfolk County Council will also have an opportunity to submit written representations and make oral representations at hearings. 
There is a lot of deep mistrust of Vattenfall's reference to the standards/guidelines relating to visuals/graphics being used to 'demonstrate' the impact of the Norfolk Vanguard onshore installations. We were told that guidance was being adhered to. We believe they are referring to Scottish guidelines. Can you help us out here? - The Landscape Institute (2011). Landscape Institute Advice Note 01/11, Photography and photomontage in landscape and visual impact assessment. I am a member of the Necton Substations Action Group and have found your contact details from the group, as I can't find a link to register my interest with the Planning Inspectorate. I am extremely concerned about the proposed sitings of Vattenfall's Norfolk Vanguard and Norfolk Boreas, and the National Grid Extension, at Necton, Norfolk. I strongly believe Necton is the wrong area for these Nationally Significant Infrastructure Projects and attach my thoughts, reasoning and opinions. Please can these be looked at and taken into consideration at this crucial pre-application stage? After the decision has been made regarding whether to accept the application for Examination all documents used to inform the decision will be published on our website. If the application for development consent is formally accepted you will be able to submit your views in relation to the project which will be considered by the Examining Authority during the Examination. The Inspectorate has published a series of advice notes which explain the Examination process, including information on how to get involved; of particular interest are advice notes 8.1 to 8.5. These are available at: [attachment 1]. I live at Fox Hill Ruston Norwich Norfolk. I recently learned that a proposed cable relay station may be built very close to where I live. Please could you give me as much information as possible on how it is possible that a company can propose to do this; this sight is so beautiful! 
Or is this a case of it will happen any way because the alternative is too expensive, eating into profit. I am all for greener energy! Wind, solar! But I do not want to see this beautiful part of Norfolk trashed! Your comments may relate to the proposals for the Norfolk Vanguard offshore wind farm project. Once the project has been accepted by the Planning Inspectorate (the Inspectorate) you will have an opportunity to register your interest in the application and make representations to us. However, the project has not yet been submitted to the Inspectorate and therefore I would encourage you to contact the developer directly to make your concerns heard as the Applicant has a statutory duty to take your views into account. The developer, Vattenfall, can be contacted by email [email protected] or phone 01603 567995. (In your previous response) you refer to my local council. To date, having emailed Mr Richard Price (N Norfolk District Council) I have received no reply. I wish to register the ambivalence shown by the local council and the ambivalence of Norman Lamb MP who wrote a general letter taking no position on the issues other than platitudes. To whom does one now refer my concerns as a citizen? As the project has not yet been submitted to the Planning Inspectorate (the Inspectorate), we have no formal powers to intervene on consultees’ behalf. I would therefore encourage you to contact the developer directly to make your concerns heard as the Applicant has a statutory duty to take your views into account. I note that you have already contacted your local authority however, if you feel your views are not being taken into account by the developer I would advise you to write to your local authority again and set out why you think the Applicant is failing to conduct its consultation properly. Your comments should be taken into account when the local authority sends the Inspectorate its comments on whether the Applicant has fulfilled its consultation duties. 
The local authority’s comments on the Applicant’s consultation will be taken into account when the Acceptance Inspector makes their decision whether to accept the application for Examination. After the decision has been made regarding whether to accept the application for Examination, your email will be published on our website together with all documents used to inform the decision. I am a member of the community affected by the onshore part of this project. I am involved in a local group which has serious concerns about the project and in particular about the informal consultation process now underway. If you have concerns about the consultation I would advise, in the first instance, you speak to your local council. I understand that they will shortly be, or are currently, reviewing the draft Statement of Community Consultation which sets out how the applicant intends to undertake their statutory consultation. Any concerns you may have may then be fed back, via your local council, to the applicant. It has come to my notice that the deadline for Consultation Bodies to respond to the above is 6th June. What constitutes a Consultation Body? We are a new action group set up to campaign against cable relay stations in unspoilt countryside. Since Vattenfall’s public consultation so far has been flawed, in terms of reach and transparency, we are playing ‘catch up’ with the planning process. It is not yet clear how we can make our voice be properly heard and I’m not sure whether this imminent deadline should have applied to us. Please could you advise whether we should be consulting with you and if so on what basis? • if the land to which the application, or proposed application, relates or any part of that land is in Greater London, the Greater London Authority. The action group you describe below would not be considered by the Planning Inspectorate to be a consultation body for the purposes of scoping. 
At the pre-application stage, we would therefore encourage you to contact the developer directly should you have any comments to make on the proposed development. Information on how you can be involved in the Planning Act 2008 process is contained within our Advice note 8 series which is also available at the link above.
So many small business owners dread tax season. Not only does it mean extra work on your plate, but the fear of getting audited if you make a mistake is also really stressful. But there’s a different way to think about tax season. All the extra work you’re doing at this time of year can actually teach you a lot about your business, and help you plan better for the year to come. To help you prepare your books properly for your accountant so that they can help you save as much money as possible. To demystify audits so you can prevent them, handle them when they happen, and stop fearing them. To teach you how to stay on top of your books year-round, so you won’t procrastinate and panic at tax time anymore. Tax season can be stressful—especially for small businesses. But it’s also a huge opportunity for entrepreneurs to dig deep into their business’s finances and performance and set themselves up for success in the coming year. Business tax deductions are a big part of that because they can save valuable funds you can reinvest to grow your business. According to the United States Internal Revenue Service (IRS), business tax deductions for 2015 totaled over $1.1 trillion. That’s one big opportunity. Understanding the tax deductions your business is eligible for goes a long way in ensuring you save every dollar possible. Even if you hand off tax preparation to a professional, it’s important to know your deductions so you can prepare and keep the appropriate records to claim them. Common mistakes to avoid when you deduct business expenses. If you need a quick tune-up on common tax terms to know before diving in, check out our glossary of tax terms. In a nutshell, tax deductions (also called write-offs) are one way taxpayers can lower their tax liability or the amount of tax they pay. When you prepare and file your taxes, you claim the deductions your business qualifies for on your annual tax return. 
Deductions come in all varieties, but they have one thing in common—they count against and reduce your total taxable income. That’s different from a tax credit, which counts dollar-for-dollar against your tax liability for the year. For example, if your business income for last year was $100,000 and you claim $20,000 in write-offs, your taxable income is $80,000. Your savings from those deductions are the total deduction amount ($20,000) multiplied by the tax rate for your income bracket. If your rate is 25%, for example, those deductions would save you $5,000 on your taxes for that year. Independent contractors, freelancers, and sole proprietors are all considered self-employed workers in the eyes of the IRS. What does that mean? At the most basic level, being self-employed means you don’t report to someone above you. From a tax perspective, self-employed workers typically pay quarterly estimated taxes that cover income tax and the additional self-employment tax. For an individual, those taxes can add up in a hurry. That’s why it’s important to understand the six deductions we’ll cover next—so you can be ready to claim them and lessen the burden once tax season rolls around. The home office tax deduction is probably one of the most well-known and least understood deductions available to self-employed people. In a nutshell, this deduction is aimed at giving you credit for expenses associated with maintaining an office in your home. It can be a substantial annual deduction, so it’s a wonder more self-employed workers don’t claim it. Previously, home office deductions were less common, making them somewhat of a lightning rod for IRS audits. But as remote work and the gig economy have grown, home office deductions have become more routine for the IRS—meaning the threat of an audit has dropped off substantially. 
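The deduction-versus-credit arithmetic described earlier can be sketched in a few lines of Python. This is purely illustrative (not tax advice), and it assumes a single flat marginal rate for simplicity, as in the $100,000 income example above:

```python
# Illustrative sketch of deduction vs. credit math (not tax advice).
# A deduction reduces taxable income; savings equal the deduction
# times your marginal rate. A credit reduces tax owed dollar-for-dollar.

def savings_from_deduction(income, deduction, marginal_rate):
    """Return (taxable_income, tax_saved) under a simple flat-rate model."""
    taxable_income = income - deduction
    tax_saved = deduction * marginal_rate
    return taxable_income, tax_saved

def savings_from_credit(credit):
    """A credit counts directly against your tax bill."""
    return credit

taxable, saved = savings_from_deduction(100_000, 20_000, 0.25)
print(taxable)  # 80000
print(saved)    # 5000.0
```

Running the numbers from the example confirms the math: $20,000 in write-offs leaves $80,000 taxable and saves $5,000 at a 25% rate, while a $20,000 credit would have wiped $20,000 directly off the tax bill.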
Not to mention, beginning in the 2013 tax year, the IRS instituted a standard rate for home office deductions, creating a much faster and simpler method for calculating your deduction amount. The rate for 2017 was $5 per square foot of office space. The space has to be used to conduct business regularly and exclusively, meaning you can’t claim an entire room if it actually doubles as the guest room. Your home office needs to be your primary place of business. If you head to the coffee shop from time to time, that’s okay. But if you rent a desk in a coworking space for 25 days out of the month, your case for the deduction is a little weaker. If you rent an office space outside your home, see the Rent and utilities section under small business deductions. There are some aspects of running a business that are better handled by experts. When you work with a professional to handle something (like an accountant to file your taxes or a lawyer to incorporate your business), you can deduct the cost of their help. It’s important to note: you can only deduct professional fees that are directly related to your business. For example, if you hire an accountant to file both your personal and business taxes, you can deduct only the cost of your business tax filing. For some self-employed workers, health insurance, retirement savings, and other benefits you’d otherwise receive from an employer can easily become your biggest expenses. That’s why they are also typically eligible as deductions from your taxes. The most significant (and frequently evolving) deduction is for health insurance premiums. Your business income for the year was less than your health insurance premiums (if you reported a net loss). In addition to health insurance and retirement contributions, you can also deduct other common types of insurance you may need as a solopreneur—including professional liability insurance, disability insurance, and home-based business insurance. 
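The simplified home-office method mentioned above (a flat rate per square foot, $5 for the years discussed) is easy to compute. As a sketch: the IRS simplified method also caps the claimable area at 300 square feet, and rates and caps can change, so treat the defaults below as illustrative and check current guidance:

```python
# Sketch of the IRS simplified home-office method described above.
# Flat rate per square foot ($5 in the years discussed), with the
# claimable area capped at 300 square feet. Rates/caps change over
# time, so these defaults are illustrative only.

def home_office_deduction(office_sqft, rate_per_sqft=5.0, max_sqft=300):
    """Deduction = qualifying square footage (capped) times the flat rate."""
    return min(office_sqft, max_sqft) * rate_per_sqft

print(home_office_deduction(150))  # 750.0
print(home_office_deduction(400))  # area capped at 300 sq ft -> 1500.0
```

So a 150-square-foot office used regularly and exclusively for business yields a $750 deduction, and even a larger space tops out at $1,500 under the simplified method.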
If you pursue additional education to either maintain or improve skills that relate directly to your business or your legal ability to continue in the field, you may be able to deduct the cost of that education. For example, an SEO consultant can deduct the cost of a course on what’s new for SEO in 2018. IRS Publication 970 offers more guidance on expenses that do and do not qualify. However, this deduction is one that gets pretty specific, so we recommend working with a tax professional to see if you’re eligible. For self-employed workers, the education deduction actually lowers your taxable income (instead of being credited against your tax liability), so it’s well worth taking if your expenses qualify. Note: All taxpayers are also eligible to deduct interest paid on their student loans. You should receive a 1098-E form from the lender, which includes the total interest you can deduct. A family vacation to Maui unfortunately does not qualify for a business tax deduction. But any travel you do to meet with or acquire clients, perform services or deliver products, and attend conferences, seminars, and other education or networking events is a deductible travel expense. Deductible transportation includes airfare, train fare, bus fare, Uber/Lyft/taxi fare, rental cars, and auto mileage—in other words, travel by airplane, train, bus, or car between your home and your business destination. The key to appropriately deducting business travel expenses is to be reasonable. The IRS is vigilant about ensuring your deductions are legitimate business necessities. That means indulging in first class airfare or trying to deduct a 10-day family vacation where you met with one client won’t fly. Keep accurate and plentiful records and use a degree of reasonableness, and your deductions won’t raise any red flags. Self-employment means you’re accepting payments from clients or customers.
Depending on the payment methods you accept—and the tools you use to process them—you’re responsible for merchant processing and service fees on those payments. For example, typical credit card processors charge between 2.5% and 4% of the transaction amount. Those fees can definitely add up throughout the year, so it’s important to keep a record of every transaction. Those records enable you to deduct merchant processing fees from your business income. Small businesses represent the vast majority of firms in the United States. They drive job creation and economic growth—they also contribute a lot to annual tax revenue. Because small business and entrepreneurship are such a vital part of our economy, there are several tax deductions that can help lessen the burden on small businesses. In fact, many different business expenses can be deducted from your business taxes, including rent and utilities for your office and even invoices and bills that go unpaid. Whether you rent a whole physical office space or a single desk in the coworking space downtown, both your rent and any utilities for the office are deductible business expenses. Utilities include: electricity, gas, water, telephone, and internet bills. If you work out of your home you can still deduct some of these expenses as they relate to your business use of the space. See the Home office deduction section above for more. For bigger-ticket equipment, you can choose to deduct the full value in the same year you buy it or spread the cost out over a number of years (depending on the type of item). Self-employed workers can deduct their own benefit and insurance premiums, and the same applies to small business owners. If you have employees, you can also deduct their salaries (including wages and bonuses) and any benefits you provide to them and their families.
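As a rough illustration of how the merchant processing fees mentioned above accumulate over a year, here is a sketch with invented transaction amounts and a 2.9% rate, a figure in the middle of the 2.5%–4% range cited earlier:

```python
# Hypothetical totals: sample amounts and a 2.9% flat fee rate are invented
# for illustration; real processors vary by card type and plan.

def total_processing_fees(transactions, fee_rate=0.029):
    """Sum the per-transaction fee over a list of payment amounts."""
    return round(sum(amount * fee_rate for amount in transactions), 2)

payments = [1200.00, 850.00, 2400.00]   # sample invoices paid by card
print(total_processing_fees(payments))  # 129.05
```

Keeping a record like this per transaction is exactly what lets you deduct the year’s total fees from your business income.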
Everything from business cards to Facebook ads to billboards counts toward this deduction, so be sure to keep records on all your advertising expenses throughout the year. As your small business grows, there are several different types of insurance you’ll need—from professional liability insurance to workers’ compensation and product liability insurance. The premiums for any insurance policies your business needs are deductible, similar to the health insurance deduction covered earlier. Understanding the tax deductions that are available to you and your business is the first step in winning the year-end season. When you make the most of the deductions you qualify for, you’re lowering your tax burden and saving money—money you can put right back into your business (or money you can use to fund a little break from your business). But there are a few common mistakes both self-employed workers and small businesses fall into, and they can end up costing you. Here are some of the mistakes we see with tax deductions—keep them in mind so you can avoid falling into these traps. No matter which business tax deductions you claim on your taxes, they all have one thing in common: you need proof. Deductions can be lucrative, so there are always people who erroneously claim tax credits and deductions they don’t qualify for. If your business is audited by the IRS, you’ll need documentation to back up every deduction you claim. That’s why it’s absolutely vital that you always document and record all of your business’ expenses, especially those you claim on your tax return. You file taxes once a year—which means you have to remember and keep track of documentation for a good while before it comes in handy. You have a lot on your plate, and we know it’s easy for invoices to disappear and receipts to fall through the cracks. That’s why we always recommend having a system in place to organize and manage your expenses and receipts. If you’re already using Wave, you can upload and categorize receipts.
You can also sync your business bank account to automatically import expenses. The average small business or solopreneur qualifies for several tax deductions, and it’s in your best interest to claim every deduction you qualify for. Some deductions (like the home office deduction or equipment depreciation) require more complex calculations to figure out the actual amount of your deduction. But the bottom line is, if you forgo deductions you qualify for, you’re losing out. A lot goes into preparing and filing your annual business taxes. It’s a complex topic—one that changes frequently, too. While some of us can and do handle personal taxes by ourselves (or with the help of DIY software, at least), business taxes are a horse of an entirely different color. It’s important for business owners to understand all that business taxes entail, but there’s a lot at stake when it comes time to actually file. From mistakes that trigger an audit to leaving money on the table, your business is better off when a tax professional handles things. And now that you know the fees you pay for a professional accountant are tax deductible, there’s no reason to make this mistake. In the world of taxes, audit is a 4-letter word. Taxpayers live in fear of the IRS audit—including the inconvenience and potential penalties it implies. But you don’t need to fear the audit, and you definitely shouldn’t let fear of the audit stop you from claiming deductions you’ve rightly earned. There are two main defenses against an IRS audit: comprehensive records and professional tax filing. If you claim only the deductions you qualify for and you have documentation to back you up, you’re in good shape even if the IRS does choose to audit your return. Since we’ve already talked about both of those defenses, your audit fears can float right out the window. Most small business owners and solopreneurs don’t get excited about taxes—but deductions are one thing you should get excited about.
After all, they’re all about scoring you valuable tax savings you can use to grow your business. If you have a solid understanding of the business deductions you may qualify for and seek professional tax help, you’ll be off and spending those savings in no time. Want to test your knowledge for this guide? Try out our tax quiz on deductions: Can I Claim This? For small business owners and independent workers, the end of the year means closing your books and taking stock of your business. Wrapping up your annual accounting is about more than just preparing for taxes—it can be a powerful opportunity to take an in-depth look at your business and finances and find opportunities to continuously improve year over year. Did your revenue grow over the last year? What can you do to continue that growth into the new year? How can you improve cash flow? Asking these questions is a vital exercise for business owners, but it can easily get lost at the end of the year. Between reconciling your books, getting a handle on tax documents, and planning for the year ahead, there’s a lot to do. Throw in holiday planning and your schedule can get downright oppressive. That’s why it’s best to prepare for year end accounting ahead of time and plan for a smooth end to the year. Accountants can sometimes seem like magicians—able to make sense of spreadsheets and numbers that read like hieroglyphics to the rest of us. But there’s one thing that can stand in the way of even the most magical and talented accountant: unfinished and disorganized bookkeeping. If your business and financial records are a mess, it’s a lot harder (and maybe impossible) for an accountant to make sense of them. That’s why your first step, before you knock on your accountant’s door, is to get all of your books and records in order and up to date. Once all of your transactions have been entered and verified, the next step is to reconcile your accounts.
Simply put, run through your records to ensure all entries in your bank and credit card statements are included and correct in your accounting ledger—and vice versa. Reconciling is an important part of making sure you aren’t missing information or working with inaccurate numbers. Now that you know everything matches up, you’ll make your year end period adjustments. These adjustments help you account for things like bad debts and depreciation on your assets. Period adjustments are a way for you to tie up the relationship between your costs and the revenue they generate. That makes for a better, more accurate and holistic picture of your business finances and performance—one you can act and make decisions based on. Here are a few of the period adjustments you may need to make. Sad as it is, unpaid invoices are a reality of the freelance and small business worlds. When you create an invoice, your books show that amount as income. If the income never actually materializes (i.e. the customer just doesn’t pay), you need to adjust your books to write off those invoices. For Wave users, here’s a guide for how to do that. When you buy certain equipment or other necessities for your business, they add to your business assets. But equipment like computers or vehicles depreciates over time and use—meaning the value of your assets goes down. You need to record that change in value to get a more accurate picture of your business’ assets. Depreciation adjustments can be a little complicated, so we recommend working together with your accountant to decide on the best depreciation approach for your business, and how to record it on your books. Accrued revenue and expenses are income and liabilities you’ve earned or used, but haven’t yet paid or been paid for. For example, if you complete a client project on December 30th and don’t have a chance to invoice for it until January 3rd, you still want that revenue to be credited for the year in which you earned it.
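Depreciation schedules vary, but the simplest is straight-line: spread the asset’s cost (less any salvage value) evenly over its useful life. A sketch with invented numbers; as noted above, your accountant may recommend a different method or schedule for your assets:

```python
# Straight-line depreciation: one common approach, shown with made-up figures.

def straight_line_depreciation(cost, salvage_value, useful_life_years):
    """Annual depreciation expense spread evenly over the asset's life."""
    return (cost - salvage_value) / useful_life_years

# e.g. a $3,000 computer expected to be worth $500 after 5 years of use
annual_expense = straight_line_depreciation(3000, 500, 5)
print(annual_expense)  # 500.0 per year
```

Each year-end adjustment would then reduce the asset’s book value by $500 and record the same amount as a depreciation expense.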
The same is true for expenses that you use in one year and pay for the next—like your business credit card bill for December, which you’ll pay in January. On the flipside of accrued revenue and expenses, there are some expenses you pay for before you use them. Consider expenses like internet service. Your bill comes in December and covers your service for the upcoming month of January. If you make the payment before the year ends, you’ve prepaid for an expense in the following year. Recording this prepayment helps keep your expenses and the revenue they generate matched up. Whenever a customer or client pays all or part of your invoice before you actually complete the work, they’ve made a deposit. That money isn’t technically income until you complete the work or ship the product, even though it’s sitting in your bank account. At year’s end, you’ll record any customer deposits as business liabilities because you’re obligated to do the work. Now that your books are all in order, let’s talk about the documents, records, and reports you need to bring along when you meet with your accountant. Various documents and reports can help your accountant get the best picture of your business and its financial health—not to mention helping them prepare for tax time. When you meet at the end of the year, it’s always best to bring your tax return from the previous year. Your return has key information about your income, expenses, and the deductions you took that can help you compare how your financial situation and business performance has changed over the last year. Let’s talk about what each of those involves. Your accounting information and basic financial reports work together to give your accountant a more holistic picture of your business finances over the past year. They can tell them things like how healthy your cash flow was, how much revenue you brought in, and whether you had a profit or loss this year. 
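The customer deposit rule above reduces to a simple classification: money received for unfinished work is a liability (unearned revenue), not income. A toy sketch with invented amounts:

```python
# Toy classification of a customer payment at year end. Labels and amounts
# are invented for illustration, not a real chart of accounts.

def classify_payment(amount, work_completed):
    """Money for finished work is revenue; otherwise it's a deposit liability."""
    if work_completed:
        return ("revenue", amount)
    return ("customer deposit (liability)", amount)

print(classify_payment(500.00, work_completed=False))  # still owed the work
print(classify_payment(500.00, work_completed=True))   # earned income
```

Once the work ships, the same amount moves from the liability bucket into revenue, which is what keeps the year-end books honest.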
In addition to those official financial reports, we also recommend bringing along a copy of your trial balance at year end and your general ledger. Remember, the more information your accountant has, the better advice they can give. Your financial reports give your accountant a good overview, but it’s also helpful to drill down into some of the details around where and why your business is spending the money you’re spending. That’s why comprehensive records of all your expenses for the year can be so helpful. All of this information helps you do a few things. For one, it helps you account for and justify everything you spent business funds on. Looking at your business expenses also helps you figure out how they correlate with revenue. That makes it easy to find patterns, identify overspending, and see where you can cut down on expenses for next year. Many of us are far from ready to file our taxes when we go for year end accounting help. Still, it’s a good idea to bring any tax documents and forms you do have ready. Your tax forms are an overview of your financial year—not unlike your business financial reports. They can be helpful in confirming things like self-employment income, spending on contractors, and payroll expenses. Few business owners fall in love with the accounting side of running a business. If your idea of a great year end means handing off paperwork to your accountant and letting them handle the rest, you’re not alone. You don’t need to have a CPA to have a grasp on your business’ finances, though—and understanding your books can only help your business thrive. That knowledge starts with getting the skinny from your accountant. Ask the questions below (and any others you have) to better understand the state of your business finances and what you can do to improve them. How can I optimize my cash flow? Cash flow is a particularly tough problem for contractors who work on a project-to-project basis and seasonal businesses. 
No matter how much revenue your business brings in, cash flow is something a lot of businesses, large and small, struggle with. It’s one thing to turn a profit at the end of the year—it’s quite another thing to maintain a healthy cash flow throughout the year. Healthy cash flow ensures you can cover your bills and expenses on time and that you won’t go belly up if an unexpected expense pops up. That’s why it’s good practice to look for ways to optimize and improve your cash flow every year, even if you haven’t had a crisis of cash. Is the legal structure of my business still the best option for me? Whether your business operates as a sole proprietorship, partnership, limited liability company (LLC), or corporation, there are a lot of benefits and disadvantages to each legal structure. Even if you worked with a professional accountant to decide on the right structure when you launched, this could change as your business grows and evolves. That’s why it’s an important conversation to have with your accountant every year—to ensure you’re putting your business in the best possible position. How and where can I grow my profits next year? For most businesses, you grow by increasing profit. Boosting profit comes down to two things: growing revenue or decreasing expenses—and your accountant can help you explore both avenues. How will recent tax law changes affect my taxes? Tax laws evolve constantly, but 2018’s complete overhaul spells a lot of change—particularly for pass-through businesses like sole proprietors and LLCs. It’s important to understand how those changes are likely to affect your 2018 taxes. Your accountant can also help identify new ways to lower your tax liability under these new laws, like taking advantage of new or different deductions or even changing your business’ legal structure. How should I estimate quarterly tax payments next year? 
Entrepreneurs and small business owners who don’t have taxes withheld from each paycheck are expected to pay estimated taxes each quarter. As the name implies, these are an estimation based on what your total income is expected to be—meaning they can change a lot from year to year. It’s important to take a look at your total income for the year, as well as how it matches up with your estimates throughout each quarter. This is the information that will help inform your estimated payments for the following year. There are a million and one things we could recommend talking with your accountant about. At the end of the day, it all comes down to the information you want and need from them. Your accountant is more than the person who keeps your books or files your taxes. They’re experienced professionals with a lot of expertise and advice to offer your business. They’ve seen all kinds of businesses and financial situations. So take advantage of that and don’t be afraid to ask every burning, nagging question you have—your accountant is there to help. Completing your year end accounting and preparing to close your books is all about wrapping the year up in a nice little bow—then mining your business information for ways you can grow and improve in the new year. Year end offers key insights that can help you increase revenue, develop a healthier cash flow, save on taxes, and a whole host of other things. As each year draws to an end, there’s one thing on a lot of minds: taxes. For small businesses and independent workers, a lot goes into filing your taxes. From parsing all the tax forms you send and receive to figuring out your estimated quarterly taxes for next year, tax season is a busy time and it’s easy for April 15th to sneak up on you. That’s why it’s always a good idea to get a headstart on tax filing as soon as the year ends. Once your annual return is out of the way, you can relax and focus on more exciting things like growing your business. 
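As a rough sketch of that estimation, here is the quarterly math with placeholder rates. The 15.3% self-employment figure is the standard combined Social Security and Medicare rate, but the income tax rate shown is an assumption, and a real calculation depends on your brackets, deductions, and the SE-tax base rules:

```python
# Rough quarterly estimate: placeholder rates, not a substitute for the
# IRS Form 1040-ES worksheet or a professional's calculation.

def quarterly_estimate(projected_net_income, effective_tax_rate=0.25,
                       self_employment_rate=0.153):
    """Split an estimated full-year tax bill into four equal payments."""
    income_tax = projected_net_income * effective_tax_rate
    se_tax = projected_net_income * self_employment_rate
    return round((income_tax + se_tax) / 4, 2)

# e.g. $80,000 of projected net self-employment income
print(quarterly_estimate(80_000))  # 8060.0 per quarter
```

Comparing each quarter’s actual income against the projection used here is exactly the year-end exercise described above, and it is what informs next year’s estimates.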
We always recommend working with a professional to prepare your business tax return. Personal taxes are one thing, but when it comes to your business, hiring a professional accountant or CPA is a no-brainer. Tax pros do this stuff for a living—they can ensure your T’s are crossed and your I’s are dotted. Peace of mind and a maximized refund are both worth a lot more than the cost of hiring a CPA. Here are a few other compelling reasons to work with a pro this year. Self-employed taxpayers can have hundreds of 1099-MISC forms to sift through. Business owners might have dozens of different deductions, each requiring its own set of documentation. Not to mention: math. Between the complexity of self-employment and business taxes and what’s at stake if you miss something, hiring a tax pro to handle your prep and filing is a no-brainer for entrepreneurs and businesses. If you’re thinking that a good tax software can handle all of that complexity for you, you’re right…and you’re wrong. Popular tax software options do make it much easier to get all the right information and documentation in the right places. That being said, tax software can’t maximize your refund the way a human CPA can. Human preparers have years or even decades of experience preparing taxes and working with businesses like yours. They can find deductions you didn’t even know you qualified for and identify other ways to lower your tax liability. Humans understand the nuance and loopholes riddled throughout the U.S. tax code—better than software can. On top of all that experience, human tax preparers are also more flexible. They can adapt to new changes in tax laws, like the massive overhaul that is the 2017 Tax Cuts and Jobs Act. Even if you’ve filed your own business taxes before—even twenty times before—the rules are different this year. The new legislation made big changes to things like the standard deduction and how much pass-through income is taxed.
Changes that affect everyone from sole proprietors to partnerships to limited liability companies (LLCs). The new laws don’t just change how much you owe; they can also change the way you maximize your refund. Maybe you’re better off taking the standard deduction instead of itemizing this year. Maybe incorporating is best for your business now. A professional tax accountant can parse all the new laws and their implications—and offer you sage advice on the best options for your business moving forward. Speaking of advice, a professional CPA or accountant can help you figure out ways to lower your total tax liability. Everything from the legal structure of your business to whether you work with independent contractors to how much you pay in quarterly taxes affects your ultimate tax refund or payment. A lower tax liability means more money that goes back into your business (or maybe funds some much-needed PTO). Tax pros can also help you set the right expectations for your business taxes. Freelancer Lindsey Peacock talks about a vital mistake she made during her first year as a freelancer—and how it ended up costing her $6,000. A professional business accountant could’ve saved Lindsey the stress of an unexpected bill by giving her the right information, upfront, about how much she should expect to pay in self-employment taxes. Now that we’ve convinced you to work with a professional tax accountant when you file, let’s talk about how and where to find a good one. If you Google something like “small business accountant,” you’ll drown in results—117 million of them, in fact. How do you filter through that many options and actually find someone you want to work with? To start, you have to know what to look for. Once you know what makes a good tax filing partner, we’ll talk about a few places where you can find a more curated list of options. In the world of business taxes, not all accountants are created equal.
Some may be more experienced, more talented, or just a better fit for your particular business—and it’s important that you know how to find the best tax pro for you. To start, you definitely want to work with a registered accounting professional who’s certified to prepare and file your taxes. That might be a Certified Public Accountant (CPA) or an Enrolled Agent (EA). Either way, you can be sure that certified tax preparers know their stuff, and you can trust their experience. To verify an accountant’s credentials, check out CPA Verify or the American Institute of Certified Public Accountants (AICPA) website. Once you confirm an accountant is certified, there are a few other, more subjective things to consider before working with them. Specialty and expertise: Just like your business serves a particular niche, so do tax accountants. Your best bet is to look for a CPA or EA who has plenty of experience working with small businesses—and your type of business (sole proprietorship, freelancer, LLC, etc.), too. Experience in your industry: On top of small business expertise, it can help to work with an accountant who has experience in your industry, as well—particularly if your industry has its own unique tax challenges (like ecommerce or real estate). Reviews and testimonials: Knowledge and experience are one thing, but how will the tax pro put them to work for you? A good accountant should be able to provide reviews or testimonials from current and past clients. If they don’t have any, that could be a red flag. Rates and pricing: When it comes to accounting and tax prep fees, you’ll need to do some research on common rates—both in your local area and for your business situation. Beware of rates that seem too good to be true. Easy to work with: Working with your accountant shouldn’t be a painful experience. Look for a pro who’s communicative, friendly, and helpful.
Now that you know what makes up a good business tax accountant, let’s talk about where to find potential accountants. As we said before, searching Google for a small business accountant is a one-way ticket to overwhelm. The easiest way to sift through your options is to start with an already curated list from the beginning. Where can you find a curated list of qualified accountants and CPAs? Start with your existing network. Ask for referrals from friends, family, and colleagues who own a business or work a side job. That’s the best way to go straight to the most qualified accountants, and your network can give you all the details about what it’s actually like to work with them. If you don’t find a match through your network, there are several business organizations that should be able to give you a list of certified and local tax professionals. Look for your local Chamber of Commerce or a nearby Small Business Development Center. Last but not least, try a review website like Yelp. While you’ll get a lot of options to sort through, you also have easy access to details that can help you narrow down the right fit for you—like ratings, reviews, responsiveness, and rates. Once you’ve found a small business accountant you can’t wait to work with, it’s time to start gathering up all the documents and other records they’ll need to prepare your taxes. For small businesses and independent workers, those documents can add up quickly. The average employee just needs to bring the one W-2 form their employer delivered to them. But for the self-employed, sifting through your income, tax paid, and expenses involves quite a few more pieces of paper. Our list is a good starting point, but it isn’t comprehensive for every situation. To be safe, give your accountant a call before you head to their office so you can be sure you have everything they need. Now that your tax return is ready to rock, how do you actually file your taxes?
Here’s the short version: if you work with a tax professional or CPA, they can file all your tax forms for you. Typically, filing your taxes is included in the normal fee you pay. The same goes for tax prep software like H&R Block or TaxAct—e-filing your taxes is included in the price. If you decide to go it alone and file your own taxes, you have two choices. The quickest and easiest option is to e-file (for free!) through Free File from the IRS. For the paper lovers among us, you can also still choose to file by mail. You can find the right mailing address on the IRS website or look for the instructions on the last page of your 1040 form. When you’re ready to pay any balance owed on your taxes, there are a lot of options for how to pay. Electronic Funds Withdrawal: If you work with a professional tax preparer or tax software, they’ll submit your payment through this method when they file your taxes. If you file yourself using Free File, you can do the same and schedule your payment for anytime up until the due date. Check or money order by mail: You can find the right mailing address on the IRS website or listed in the instructions on your tax forms. Same-day wire: To pay at your bank, fill out the Same-Day Taxpayer Worksheet and bring it into your bank’s nearest location. Don’t forget: wire transfers usually incur a fee from your bank, so it’s important to find out how much the fee will be before you decide to pay this way. Cash: To pay with cash, you’ll need to visit the Official Payments website to verify your information. Once the IRS confirms your information, you’ll get an email with your payment code, location to pay, and other instructions. Cash payments incur a $3.99 fee. Taxes aren’t the most exciting part of running your own business, but they are an important one nonetheless. By gathering your information together and getting an early start as the year ends, you’ll be in good shape to find the right accountant and maximize your refund—so you can kick back and relax as the tax deadline approaches.
As a small business owner, you go to great lengths to make sure you’re complying with all the necessary tax laws and requirements. You save up and organize all your receipts, sort through dozens of 1099 forms, and make payments on time. So when the government decides to audit your tax return, it can feel a little scary. Here’s the thing: you don’t need to be afraid of an IRS audit. For starters, audits are relatively rare when you consider the sheer number of tax returns the IRS handles. Business News Daily notes that audits make up less than 1% of the 150 million returns filed each year. We’ll also cover what you’ll need to give the auditor, then finish off with a story about why you really, truly don’t need to sweat an audit. Let’s get to it! What does it mean to get audited by the IRS? There are a lot of reasons the IRS might audit your business tax return—and very few of them mean the government thinks you committed criminal tax fraud. Remember: an audit isn’t an accusation. It just means the IRS wants to take a second look at your taxes to ensure everything is accurate, and fix anything that isn’t. The key to getting through an IRS audit unscathed is to be prepared for it (more on that later!). It can help to understand why the government might audit your business, so you can anticipate an audit and take steps to avoid it. That being said, the IRS also conducts random audits that can happen even if none of the red flags above apply to you or your business. That’s why it’s important to always be prepared with documentation to back up your income, expenses, and deductions. Keep your business and personal finances separate: This is a good tip to follow even if all it ever does is simplify your bookkeeping. Keeping your finances separate makes it easier to organize and keep track of your business income and expenses, so you can prove your income, expenses, and deductions in the case of an audit.
Track every penny: In the same vein, it’s important to keep track of every penny that flows into and out of your business. Not only does this enable you to claim all the deductions you qualify for, but it makes for concrete documentation of your business income and expenses, which comes in handy when an audit questions their veracity. Stay organized throughout the year: One of the most common mistakes small business owners make is failing to save and organize their business receipts, invoices, and reports throughout the year. Work with a tax professional to prepare and file: Back in Chapter 3, we talked about the importance of working with a professional accountant or CPA to file your taxes. An added benefit of working with a professional is having a partner to help you through the audit process and work with the IRS on your behalf. No matter how prepared you are for a tax audit, it’s normal to feel a little panicky when an audit notice arrives in the mail. For starters, just take a breath and remind yourself that you’ve prepared for this. After that, these simple steps are the best way to get started responding to and working through an audit. Verifying the authenticity of your audit notice might seem like a wasted first step, but it’s actually really important. The IRS is a favorite of scammers because they know people are likely to react quickly and without too much thought because we’re all secretly terrified of the IRS. Don’t be that person—there are a few obvious calling cards of an audit scam. The biggest red flag has to do with the medium of the notice. The IRS, for security reasons, will only ever initiate contact via a mailed letter to your home or business address. If you get a phone call, voicemail, or email about an audit, it’s a fake. If you’re ever unsure whether an audit notice is legitimate, your best bet is to contact the IRS directly. You can call the agency at 800-829-1040 or visit their website for help confirming the letter is for real.
Once you know the audit notice is legitimate, your very next step should be to contact your accountant or CPA. Whether they prepared and filed your tax return or you did it yourself, it’s absolutely vital to have a professional on your side during the audit process. For one, your tax pro may have been through an audit before. They’ll know how to handle one and what to expect. They can also help you decipher why you’re being audited—whether it’s an actual mistake or just a routine audit to verify the information in your return. When it comes to working with the IRS, it’s always best to have a professional in your corner—because the IRS auditor is a professional, and their job is to collect all the tax money the government has a claim to.

Many small business owners choose to sign over power of attorney to their accountant or CPA, too. That enables them to interact with the IRS directly and on your behalf, so you can take a step back and focus on running your business.

When the IRS conducts an audit, they’re essentially looking for proof: receipts to back up your business expenses, documentation to show your eligibility for the deductions you claimed, and verification of your income. That’s why, as we discussed above, the number one key to surviving an audit is to have that documentation organized and ready to go.

Once you’ve spoken with your accountant and handed over all the necessary documents, it’s time to take a step back and let your CPA handle the rest. As we said above, the IRS auditor is a professional whose job is to secure all the tax money the government’s owed. You aren’t a tax professional, and the auditor is, so it’s easy to get tripped up or confuse things. That’s why, to be safe, it’s best to limit your contact with the auditor and let your CPA take care of communicating and working with them.

Between the horror stories of audits taking over people’s lives and the very real fear of math, it’s easy to panic when that audit notice lands in your mailbox.
But as one of our Wave customers learned, going through a government audit isn’t as scary as it’s made out to be. After receiving an audit letter in the fall of 2017, their gut reaction was much the same as most small business owners’: “My first thought was, ‘What did I do wrong?’ Or, maybe more accurately, ‘What didn’t I do right?’ Could there have been a moment where I did something not quite by-the-book, unintentionally?”

“The audit experience could have been much more difficult,” they shared, “if I hadn’t kept such good records.” Dealing with the government can sometimes feel like a zero-sum game—but the vast majority of IRS audits happen because of small errors that are easily resolved through correspondence alone. “The biggest lesson I learned from this whole experience was that mistakes happen, even when you have the best of intentions… but I was surprised to find that most were fixable.”

As long as your tax documents are organized and up-to-date, you’re in good shape to resolve the audit pretty painlessly. Keep proper documentation, and only deduct ordinary and necessary business expenses that are allowed by the IRS. Even if you are selected for an audit, you’ll know you have nothing to worry about.

IRS audits aren’t fun, but they don’t have to take over your life or cause unnecessary stress. If you stay organized and let your accountant or CPA handle the auditor, you’ll be in good shape to survive the audit and maybe even learn a little something along the way.

The National Small Business Association notes that around 20% of small business owners spend up to 120 hours on taxes each year—that’s a lot of time spent away from actually running your business. That’s why business owners who don’t dread tax season have one thing in common: they keep their books and finances in order all year long. That way, when the year ends, they’re in the best possible shape to take stock of their business and start preparing for tax season.
Year-end accounting is about more than just closing your books on the year. It’s an opportunity to look at your business’ performance and financial health, so you can set goals and plan improvements for the upcoming year. Staying organized and planning ahead also helps you get a jump on activities and record-keeping, lowering your business’ tax liability next year.

Tips for staying organized and on top of your books all year long

We briefly touched on some of the key benefits of staying organized and on top of your business accounting. In addition to losing out on those vital benefits, there are also some important ways that neglecting your books can cause you harm. Here are a few of the biggest.

There’s no getting around organizing your receipts come tax season—we all have to sit down and file our taxes no matter what. If you’ve neglected your books all year, you’re only adding undue stress and time to this process at year’s end.

Your business’ survival depends on your ability to make investments and pay bills when you need to. If you aren’t properly tracking the money that flows into and out of your business, it’s easy for a cash flow crisis to hit when you’re least equipped to handle it. Instead, steady bookkeeping helps you keep a pulse on the financial health of your business.

As small business owners know, tax time comes more than once a year. If you aren’t keeping track of your books, deadlines like your quarterly tax payments can easily sneak up on you or even sneak by without you noticing.

No matter what tax deductions you’re eligible for, you can only claim those you have documentation to back up (in case of an audit). If you aren’t staying on top of those expenses and deductions, you could lose out when it comes time to file. It can seem daunting to keep track of all these documents for the entire year.
That’s why we pulled together some key tips for staying on top of your books without sinking unnecessary time and stress into the process.

Once you know what you need for your year-end filing next year and beyond, it’s time to put systems in place to track your data. That means stashing away random receipts in a shoebox is no longer an option.

The lead-up to your year-end filing shouldn’t be the only time you check in on your business finances. To avoid surprises and to run your business efficiently, maintaining visibility into your finances is crucial. And once you’ve gathered the appropriate paperwork and established some expense tracking systems, “checking under the hood” of your business becomes far simpler. Set some time aside in your calendar once a month or once a quarter to dive deep into revenues and expenses. This simple practice can prevent any last-minute surprises from popping up at tax time. Create some calendar alerts to ensure you don’t forget.

Regularly reviewing your books can also empower you with insights to run your business better. For example, a quick look at your business finances could reveal that the average time it takes for clients to remit payment is 60+ days. To curb outstanding invoices and improve cash flow, you can set up automatic reminders for clients to pay their invoices after 15 or 30 days. And making this kind of high-impact change isn’t that difficult: it can come from a little bit of data and a quick review of your books.

In the midst of all this preparation, you’re likely thinking about the things to come in the year ahead. What major changes will you and your business celebrate in the next 12 months? All of these events can affect your taxable income and may require additional documents when submitting your taxes.

Maximize retirement contributions: Saving for retirement not only prepares you for living well in your golden years, but it also lowers your taxable income.
Contributing to a qualified savings plan like a 401(k) or an RRSP can help you take advantage of the power of compound interest while also saving your business some serious cash on your tax bill.

Make charitable donations: Helping out qualified non-profits is a win-win for businesses and charities alike. If you make a donation before the end of the year, you can deduct the contribution from your year-end filing to reduce your tax burden.

During the year, it’s easy to rationalize letting your books slip. You’re busy, your customers and employees need you, you’re focused on growing the business. But at the end of the day, maintaining up-to-date and accurate books and records is one of the most powerful tools you have to grow your business. From making tax time easier to helping you better understand your business finances and grow, staying organized all year long just makes sense for smart business owners.
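The invoice review described earlier, measuring how long clients take to pay and flagging overdue invoices for a reminder, really does take only a little bit of data. Here is a minimal sketch in Python; the dates and the 30-day reminder threshold are hypothetical, and in practice your accounting software does this bookkeeping for you:

```python
from datetime import date

# Hypothetical invoice records: when each was issued and (if settled) paid.
invoices = [
    {"issued": date(2019, 1, 1),  "paid": date(2019, 3, 10)},  # 68 days to pay
    {"issued": date(2019, 1, 15), "paid": date(2019, 3, 8)},   # 52 days to pay
    {"issued": date(2019, 2, 1),  "paid": None},               # still outstanding
]

def average_days_to_payment(records):
    """Average number of days between issuing an invoice and getting paid."""
    days = [(r["paid"] - r["issued"]).days for r in records if r["paid"]]
    return sum(days) / len(days) if days else 0.0

def needs_reminder(record, today, threshold_days=30):
    """Flag an unpaid invoice once it is older than the reminder threshold."""
    return record["paid"] is None and (today - record["issued"]).days > threshold_days

print(average_days_to_payment(invoices))               # 60.0 -- clients average 60 days
print(needs_reminder(invoices[2], date(2019, 3, 15)))  # True -- time to send a reminder
```

A review like this once a month or once a quarter is enough to spot the pattern and decide, say, to tighten the reminder schedule from 30 days to 15.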
2019-04-18T22:46:57Z
https://blog.waveapps.com/the-complete-guide-to-small-business-tax-season/
Pang , Suk Min (2015) Factors Influencing Consumers Willingness to Purchase Private Label Brands. Master dissertation/thesis, UTAR. Phang, Daniel Jen Wye and Kwan, Su Ann and Lua, Hui Shan and Sim, Yee Roo and Tan, Shei Ni (2015) The impact on audit fees after IFRS Convergence: An investigation in trading and services industry. Final Year Project, UTAR. Phoong, Wei Siang (2015) Software User Interface and Algorithm Development for Signal and Noise Characterization. Final Year Project, UTAR. Pin, Teh Yew (2015) Evaluation Of Design Guidelines: Questionnaire Design For Evaluating Children Educational App. Master dissertation/thesis, UTAR. Ping, Ang Tun (2015) The Relationship Between Leadership Styles And Employees’ Job Satisfaction In Small And Medium Enterprises (Smes). Master dissertation/thesis, UTAR. Puah, Yan Jun (2015) Trends in Rainfall Pattern and Spatial Variation – Case Study: Langat River Basin. Master dissertation/thesis, UTAR. Puan, Arthur Chok Ho (2015) Exception handling for 5-stage pipeline micro-architecture. Final Year Project, UTAR. Qing, Tan Yin (2015) Mindfulness Meditation Improves Brain-Computer Interface (Bci) Performance. Master dissertation/thesis, UTAR. Quek, Tong Ern and 郭, 彤恩 (2015) 《海上花列传》:空间建构与小说叙事论析. Final Year Project, UTAR. Qwai, Loh Pui (2015) The Determinants Of Attitudes Towards Organic Food In Malaysia. Master dissertation/thesis, UTAR. Ramachendrin, Arvind Devar (2015) In Vitro activity of local plants from Malaysia against Chikungunya virus. Final Year Project, UTAR. Sailesh Kumar, Nganasekarann (2015) The Study of Oil Spillage Clean Up using Polymers. Final Year Project, UTAR. Sandip, Singh (2015) Optimization of Biodiesel Production via Reflux Condenser Methyl Acetate Reaction from Cerbera Odollam (Sea Mango). Final Year Project, UTAR. Saravana Kumar, Manavalan (2015) Development and Characterization of Porous Epoxy/Bentonite Clay particles through Water-Oil Homogenization Method. Final Year Project, UTAR. 
Saw, Jing Xien (2015) An Investigation Into The Behaviour Of Simply-Supported Bamboo-Geotextile Composite System. Final Year Project, UTAR. Sheng, Lee Kok (2015) A Study Of Steel Structure Design & Build System In Construction Industry. Master dissertation/thesis, UTAR. Shin, Voo Chuang (2015) Injection Moulding: Warpage Reduction By Reducing Residual Stress. Master dissertation/thesis, UTAR. Sia, Pow Ping and 谢, 宝駍 (2015) 《马来纪年》的道家史笔考析. Master dissertation/thesis, UTAR. Sim, Jia Genn and Cheow, Chee Yong and Chong, Su Jen and Ho, Poh Lim and Lee, Chiew Ling (2015) Relationship between FDI inflows and corruption in 5 selected ASEAN countries. Final Year Project, UTAR. Sing, Wong Koh (2015) Machine Learning Approach to Opinion Mining. Master dissertation/thesis, UTAR. Siow, Wan Zing and 萧, 婉君 (2015) 论森州育侨华小的校史变迁与葫芦顶新村社区的关系——以口述历史为方法. Final Year Project, UTAR. Sitaram, Githadewi (2015) A Study On Employees’ Perception Of Organization Corporate Social Responsibility Towards Employee Commitment And Organization Performance. Master dissertation/thesis, UTAR. Sivanes, Murugaiah (2015) A Review Study of Floating, Production, Storage and Offloading (FPSO) Oil and Gas Platform. Final Year Project, UTAR. Soon, Nee Teck (2015) The Contribution of Knowledge Management to Project Management Performance in Engineering Project-Based Organisations. Final Year Project, UTAR. Soon, Zheng Foong (2015) Investigation of electron beam irradiated polystrene under oven treatment. Final Year Project, UTAR. Tai, Angeline Wei Jing (2015) Controlling lab PCs using wi-fi. Final Year Project, UTAR. Tai, Guan Hwee and Chong, Lee Woon and Low, Kee Kee and Tan, Ling Lee and Tan, Seow Cheng (2015) Factors influencing customer loyalty in airline industry in Malaysia. Final Year Project, UTAR. Tai, Zu Jie (2015) Data Analysis using Particle Swarm Optimization Algorithm. Final Year Project, UTAR. Tan, Cheow Hoong (2015) Development of Thermal Interface Material. Final Year Project, UTAR. 
Tan, Chung Seong (2015) Development of DC-DC Buck-Boost Converter for Bi-Directional Power Flow Inverter. Final Year Project, UTAR. Tan, Jee Chin and 陈, 奕进 (2015) 草丛身影——论鲁迅《野草》的身体书写. Final Year Project, UTAR. Tan, Kae Yi (2015) User-Driven Story Generator Using Public Knowledge Base. Master dissertation/thesis, UTAR. Tan, Kok Keat and Law, Wei Liang and Ong, Ee Ming and S'ng, Kee Hoe and Tan, Chia Chien (2015) The causality between crude oil and gold markets in Russia. Final Year Project, UTAR. Tan, S. L. and Zaharah, A. (2015) Tuber crops. UTAR Agriculture Science Journal. Tan, S.L (2015) Sweetpotato - Ipomoea batatas - a great health food. UTAR Agriculture Science Journal. Tan, S.L. (2015) Cassava – silently, the tuber fills. UTAR Agriculture Science Journal. Tan, Sean Chun Aun and Choo, Yen Yee and Tee, Denise Yin Ning and Mau, Wei Ying and Tan, Xiau Chuin (2015) The changes of housing price and its relationship with the macroeconomic factors in the United States. Final Year Project, UTAR. Tan, Wey Yao and Goh, Kin Hou and Heng, Jia Min and Lim, Mong Ru and Tan, Yann Hao and Tan, Siew Kek (2015) Motives of property investment among Utar's staff in Kampar. Final Year Project, UTAR. Tan, Yee Kuan (2015) Phytochemicals screening and antibacterial activity of Andrographics paniculata. Final Year Project, UTAR. Tay, Yi Hui (2015) Development of Nylon-6/Graphene Oxide (GO) high Performance Nanocomposites. Final Year Project, UTAR. Teck, Lam Tin (2015) Factors Affecting Organizational Identification Among Gen X and Gen Y in Malaysia Private Sector. Master dissertation/thesis, UTAR. Tee, Lilian and 郑, 丽莲 (2015) 张溥论“二潘”之比较研究. Final Year Project, UTAR. Teh, Kian Chong (2015) Badminton Game Analysis. Final Year Project, UTAR. Teh, Tict Chuan and Teo, Brandon E Jye and Goon, Wei Liam and Tan, Shi Ling (2015) Manipulation in crude oil futures markets: evidence from price-volume relationship. Final Year Project, UTAR. 
Teh, Yong Hui (2015) Labview Based Pid Algorithm Development for Z Motion Control in Atomic Force Microscopy. Final Year Project, UTAR. Teng, Choon Yong and 丁, 俊勇 (2015) 辛金顺诗文之空间书写. Final Year Project, UTAR. Teng, Sin Liang (2015) Camouflage cursor anti-shoulder surfing technique. Final Year Project, UTAR. Teo, Kheng Hwang and Ng, Siew Wen and So, Boon En and Tan, Wee Kiong and Yu, Khai Chien (2015) The determinants of Malaysian stock market performance. Final Year Project, UTAR. Teo, T.M. (2015) Effectiveness of the oil palm pollinating weevil, Elaeidobius kamerunicus, in Malaysia. UTAR Agriculture Science Journal. Teo, Yi Rui and 张, 忆蕊 (2015) 论柔佛东甲启明一小华校发展史和东甲第二新村的集体记忆. Final Year Project, UTAR. Teoh, Kian Tat and Ng, Sara Phui Yeng and Khoo, Hui Ru and Voon, Soon Yoong and Yeep, Pui Lin (2015) Effectiveness of martingale strategy in gambling and investment. Final Year Project, UTAR. Teoh, Ling Wei (2015) Synthesis and characterization of organotungsten complex with mixed P/S ligand. Final Year Project, UTAR. Thiem, Woh Mun (2015) Investigate the Feasibility of Implementing Earth-To-Air Heat Exchanger (Eahe) As a Sustainable Cooling System in Malaysia. Final Year Project, UTAR. Thriumalai, Komala (2015) Isolation and Characterization of Naturally Occurring Calcite-Forming Bacteria in Malaysia. Master dissertation/thesis, UTAR. Ti, Wee Ming (2015) Antioxidant profile and antioxidant activity of Artemisia argyi. Final Year Project, UTAR. Tiong, Pei Kee (2015) Age Group Estimation from Face Images. Final Year Project, UTAR. Tiu, Ervin Shan Khai (2015) Engineering properties of lightwieght foamed concrete with 7.5% eggshell as partial cement replacement material. Final Year Project, UTAR. Tong, P.S. (2015) Trees and sunlight. UTAR Agriculture Science Journal. Tong, P.S. (2015) Zenxin - an organic farming journey. UTAR Agriculture Science Journal. Too, Yuen Xian (2015) The effect of China’s outward foreign direct investment on economic growth. 
Master dissertation/thesis, UTAR. Trevor, Richards (2015) Biochar - reversing the flow of carbon. UTAR Agriculture Science Journal. Wey, Goh Kai (2015) Elucidation Of The Roles Of y-Synuclein In The Invasiveness and Survival Of Colorectal Cancer Cell Line, LS 174T. Master dissertation/thesis, UTAR. Wong, Coong Mum and Lim, Mei Gee and Shum, Shen Hwei and Soh, Yan Qi and Yong, Lai Han (2015) Bank-specific and macroeconomic determinants of bank's profitability: A study of commercial banks in Malaysia. Final Year Project, UTAR. Wong, Hong Mun (2015) Community financial portfolio management system. Final Year Project, UTAR. Wong, Nicholas Weijian and Loh, Zi Hung and Lim, Saw Nee and Lam, Wen Jian and Lim, Adrian Thuan Ern (2015) Impact of macroeconomic variables on manufacturing sector growth in Malaysia. Final Year Project, UTAR. Wong, Shi Yee and 黄, 诗怡 (2015) 整全神学训练,服事教会社会——以《砂拉越诗巫卫理神学院》为个案研究. Final Year Project, UTAR. Wong, Ting Yi (2015) Contactless Heart Rate Monitor for Multiple Persons in a Video. Final Year Project, UTAR. Wong, Win Liang and Chua, Chon Wee and Goh, Wan Zhi and Yeik, Michael Han Chen and Chew, Eva (2015) Linkage between Malaysian economic growth and energy consumption: The role of technology. Final Year Project, UTAR. Woo, Wing Hong (2015) Student Attendance Recording System. Final Year Project, UTAR. Yap, Bryan Chun Yung and Chan, Wen Qing and Chua, Yuen Yee and Goh, Sing Kian and Tong, Yew Hoong (2015) Impact of internal factors in measuring profitability of local and foreign banks: evidence from 16 Malaysia commercial banks. Final Year Project, UTAR. Yap, Kwai Chin and 叶, 桂晶 (2015) 从《快园道古》看张岱的“谐谑”内蕴. Final Year Project, UTAR. Yap, Mei Yee and Ching, Hoon Ming and Tan, Xian Zheng and Wong, Shiau Suang and Wong, Yee Ling (2015) Determinants of executive directors' remuneration in Malaysia. Final Year Project, UTAR. 
Yap, Ming Zhe (2015) Culture assessment of the bacterial quality of air in the food preparation areas of a cafeteria and characterisation of the gram-positive bacterial species isolated. Final Year Project, UTAR. Yee, Soo Meng (2015) Corporate Social Responsibilities Disclosure Of Listed Companies In Malaysia. Master dissertation/thesis, UTAR. Yehdish, Makhan Lal (2015) Investigation of the Optical Arrangement in Radio Telescopes. Final Year Project, UTAR. Yeoh, Keat Liang (2015) Virtual personal bookshelf system (VPS). Final Year Project, UTAR. Yeoh, Yui Shan and 杨, 育珊 (2015) 论黄锦树《鱼》的叙事策略. Final Year Project, UTAR. Yeow, Sze Min and 尤, 思敏 (2015) 从王安石学记文体论其早期教育观念(1042-1067). Final Year Project, UTAR. Yim, Elaine (2015) Malaysia’s Flower Show – Floria 2015. UTAR Agriculture Science Journal. Yim, Joanne Sau Ching (2015) Job Satisfaction and Cynicism Towards Changes In Education Among School Teachers In Kinta Selatan District. Master dissertation/thesis, UTAR. Yin, Goh Sze (2015) Factors Influencing Life Insurance Consumption. Final Year Project, UTAR. Yiong, Trancy Hung Yii and 杨, 凤茹 (2015) 诗巫福州红酒文化研究. Final Year Project, UTAR. Yong, Ann Li (2015) Antibacterial and bioactivity analysis of selected medicinal plants and their effects on bacterial protein expression profiles. Master dissertation/thesis, UTAR. Yong, Jing Wen and Low, Jun Xian and Lee, Reagan Hon Leong and Wong, Kah Chun and Wong, Suk Yee (2015) Comparison between performance in sukuk and conventional bond in Malaysia. Final Year Project, UTAR. Yong, Kah Wah (2015) Infrared-based text entry system for handicap. Final Year Project, UTAR. Yong, Siew Meng (2015) The Influence of Various Facets of Ethical Behavior on Employees Job Satisfaction and Organizational Citizenship Behavior. Master dissertation/thesis, UTAR. Yong, Teck Wai and Chong, Lee Ming and Looi, Po Wai Kevin and See, Jie Hui and Yew, Suet Yee (2015) Determinants of demand on life insurance in Perak. Comparison between rural and urban areas. 
Final Year Project, UTAR. Yuan, Bing (2015) Factors Influencing Customers Satisfaction In Online Shopping. Master dissertation/thesis, UTAR. Yuan, So Soon (2015) The Impact of Organizational Justice Towards Employee Job Satisfaction In Malaysia. Final Year Project, UTAR. Yubenraj, Ramakrishnan (2015) A Review Study of Oil and Gas Production Facility for Semi-Submersible Platform. Final Year Project, UTAR. Yun, Huan Bin (2015) Energy Audit on Faculty of Engineering and Green Technology in Universiti Tunku Abdul Rahman. Final Year Project, UTAR. This list was generated on Sun Apr 21 22:17:25 2019 MYT.
Nano paint protection is the most effective coating available for your car's surface: not only does it keep the shine of your paintwork gleaming as though the car has just left the showroom, it is also self-cleaning. The process behind it is fascinating and is based on something from nature. It is called the lotus effect, because it was discovered that the lotus blossom has self-cleaning properties on its leaves. The same applies to some other plants, such as cane, nasturtium and prickly pear, and some insects, such as certain butterflies and dragonflies, have the same ability. When was the lotus effect discovered? Scientists first began to study the phenomenon in 1964, and the work was developed further by Barthlott and Ehler in 1977; it was they who first coined the term "lotus effect". The leaves of the lotus flower are extremely water-repellent, a property known as super-hydrophobicity. When it rains, water droplets roll across the leaves and pick up dirt, removing it from the surface and so allowing the plant to stay clean and its leaves to carry out photosynthesis so the plant can grow. The high surface tension of a water droplet means it tends to minimise its surface area, pulling itself into a shape as close to a sphere as possible. On contact with a surface, forces of adhesion cause the surface to become wet. The surface may become partially or completely wetted, and this depends on the surface tension of the droplet and the adhesive character of the surface. The less of the droplet that is in contact with the surface, the higher that surface's hydrophobicity is said to be. Hydrophobicity can be measured by the contact angle of the droplet on the surface: the lower the contact angle, the lower the hydrophobicity, and vice versa.
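For readers who like numbers, the contact angles and wetted-area percentages discussed here can be reproduced with a simple spherical-cap model of a resting droplet. This is an illustrative sketch added for this article, not something from the original source; the function name and the idealised geometry are assumptions for demonstration.

```python
import math

def contact_area_fraction(theta_deg):
    """Fraction of a resting droplet's total surface that touches the solid,
    modelling the droplet as a spherical cap with contact angle theta_deg.
    Relative to pi*R^2: base disc area = sin^2(theta), curved cap area = 2*(1 - cos(theta))."""
    t = math.radians(theta_deg)
    base = math.sin(t) ** 2           # wetted contact disc
    cap = 2.0 * (1.0 - math.cos(t))   # exposed curved surface of the cap
    return base / (base + cap)

for theta in (90, 160, 170):
    print(f"contact angle {theta:3d} deg -> {contact_area_fraction(theta):.1%} wetted")
```

Running this gives roughly 33% wetted area at a 90-degree contact angle, about 3% at 160 degrees and under 1% at 170 degrees, which is the same ballpark as the "about 2%" and "0.6%" figures quoted for super-hydrophobic leaves.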
If the contact angle on a given surface is less than 90 degrees, the surface is described as hydrophilic; greater than 90 degrees, hydrophobic. Some plants have a contact angle of as much as 160 degrees, which means that only around 2% of the water droplet is in contact with the surface. In the case of lotus leaves the contact angle is as high as 170 degrees; such surfaces are said to be super-hydrophobic, and the area of a droplet in contact with a lotus leaf may be as little as 0.6%. How does dirt get cleaned off the surface? When dirt lands on such a surface, the adhesion between the dirt particle and the surface is far weaker than on other surfaces. When a water droplet rolls across the surface, the adhesion between the surface and the dirt particle is weaker than that between the particle and the droplet, so the particle is picked up by the droplet and carried away, cleaning the surface automatically. This only works because of the high surface tension of a water droplet; it does not work the same way with organic solvents. In essence, that is how it works for the lotus leaf, and exactly the same principle is used in nano paint technology for car paint protection. How does nano paint protection help vehicles? Nano technology has advanced to the point where a hard protective ceramic coating can be applied over the lacquered finish of a car's paintwork when it leaves the factory. The lacquered surface is not immune to bird droppings, UV or chemical etching, and can easily be marked or scratched. When that happens, the only remedy is to polish out the scratch or swirl marks, reducing the thickness of the factory paint layer. Over time this means a respray will eventually be required.
Many people use a wax polish or a polymer paint sealant on the paintwork, but this still leaves the paint surface vulnerable to damage from scratches, bird droppings and so on, as these can penetrate the polish or sealant. What is the answer? The answer is to apply a hard nano paint technology coating to the surface of the paintwork. The coating is far less susceptible to damage than other surface finishes, and even if swirl marks or other damage do occur, the coating itself can be polished off and re-applied. This means the factory paintwork is not harmed and keeps its thickness and shine. Essentially, nano technology copies effects found in nature, and super-hydrophobic coatings have been used on man-made surfaces for a considerable number of years. One such application is self-cleaning glass for windows, but there have been many others. For example, super-hydrophobic coatings incorporating Teflon particles have been used on medical diagnostic slides, and the same technology has been applied to things as varied as roof tiles and leather upholstery. It can be used on satellite dishes, for instance, to reduce the likelihood of rain fade and to counteract any build-up of ice and snow on the antenna. It has even been used for tree planting. The Groasis Waterboxx is a device designed for planting small trees in areas of extreme drought, allowing a young sapling to get enough water to its roots until they grow down far enough into the ground to reach water on their own. The Waterboxx can then be removed and used again elsewhere. Nano technology enables the Waterboxx to harvest dew and condensation and funnel it down to the roots of the sapling, even in desert conditions.
To protect the paintwork of your car and keep that fresh-out-of-the-showroom look with none of the hard work, nano paint protection is the answer for the discerning driver. You will never have to worry about scratch marks or bird droppings, and your car will only need a quick rinse to remove any dirt. It doesn't matter whether your car is brand new or several years old; we can keep it the way it is, or restore it to the way it was. You will also be able to laugh at your neighbours washing and polishing every Sunday morning! Car paint protection is essential in restoring your car's paint to its former glory. It also protects it, so your vehicle keeps its good looks for years to come. Plenty of car-care products already exist on the market, and all of them claim to protect your car's paint. The truth is that these products are not all the same, just as not all cars are the same. While every car can get you from point X to point Y, there is still a huge difference between car models, and the same applies to car wax, paint protection and polishes. Each of these products gives a certain amount of shine, but that is as similar as they get. In this article we set out, whether you are a new car owner or a not-so-new one, the essential facts about the paint protection products available on the market, so that you can make the right choice when picking the best protection for your prized vehicle. Are all these products the same? Obviously they are not. There are many types of paint protection product, varying in quality and price. However, when buying car paint protection in Melbourne, people should not make the mistake of basing their decision solely on the price of the product.
Rather, your decision on the type of protection you choose should be informed by what it is you want to achieve. A product that offers protection against UV rays, bird droppings, road salt, acid rain and bug remains all rolled into one is naturally more expensive than a product that only gives a temporary shine. Another point is that different products give different degrees of shine. If you want a product that gives a longer-lasting shine yet needs less maintenance, be prepared to pay a few extra dollars for it. The majority of protectants available on the market today give only a limited amount of protection against the elements mentioned above; most of them also do not provide a lasting shine and need reapplication. It is therefore quite important to be mindful of what you choose for your car. Can car paint protection help preserve your car's value and resale value? The paintwork and appearance of a car help preserve not just its value but also its resale value. A car that is well maintained, with a mirror-finish paint, has an improved resale value. It also saves you money and time now: with good paint protection, much less time is spent cleaning, since grime and dirt are easily removed, and it won't need polishing to keep its looks. What are some of the things you can expect from good car paint protection? One of the main advantages of car paint protection is that it adds real value to the car. A proper paint protection application can give your car an amazing glass-like shine as well as preserving its value. For these reasons, people are often prepared to part with $1000 just to get good paint protection. When it is done properly there will be less waxing, and cleaning will be easier should your car get dirty and need washing. That translates to more savings in future.
Can your car's paintwork be ruined by a simple bird dropping? The answer is yes. The chances that your car is being damaged daily without you even noticing are very real. Many people assume the biggest threat to their car's paintwork is UV rays. While this is true, that damage takes many years; bird droppings are a much more immediate risk, causing damage in just a matter of days. Bird droppings, as you know, are the product of a bird's digestive system. Without going too far into the biology, droppings can contain high levels of acid which can ruin the paintwork. Many people are surprised by the amount of damage a bird dropping can cause: it may go unnoticed by an untrained eye, but a professional who knows what to look for in a car will spot it at once. Is car waxing the best solution? Although car waxing is known to give an immediate shine, it is not the most effective solution. The reason it is called wax is that it is made of wax, and as you know, wax melts when exposed to heat. When wax polish is applied to your car, it softens in the heat; this dulls the shine and makes your car more prone to trapping contaminants. Wax is fine for show cars, since these do not sit in the sun all day, every day. Also, by its very nature, wax barely adheres to the car's surface. Wax cannot bond well to any surface; just try sticking wax to any surface and you will see this. In the same way, wax eventually washes off your car, leaving it with little or no protection at all. What else do you need to know about car paint protection? The need to look after your car properly, that is, detailing and washing, cannot be overemphasised.
Picking trusted vehicle clean electrical outlets and detailers is not simply important however additionally secures your automobile from damage. In short, manage the paintwork of your auto similarly you would certainly care for your skin. Anything that succeeded t damage your skin succeeded t damage your vehicle s paintwork. An additional essential point is a first class car hair shampoo. This reduces surface area scraping that arise from rubbing when the car is being washed. You likewise require a soft stack wash mitt or sponge and it must be of high quality. If you really want a streak-free drying, you have no option but to insist on a terry towel or a leather chamois to dry your vehicle. Just like anything else, you simply acquire exactly what you have actually paid for with car paint protection. It is crucial to decide on the right place to clean or specific your vehicle. This need to be assisted by the span of time it takes to clean your vehicle safely and appropriately. Hopefully you have found this article helpful about car paint protection Melbourne. Visit this site again for more information about car paint protection Brisbane. Exactly how Does Nano Paint Protection Help Autos? Nano paint protection for vehicles is the best possible type of surface for your motor vehicle s area since not simply does it keep the sparkle of your paintwork gleaming as though it has simply left the display room, yet it is likewise self-cleaning. The procedure behind it is exciting and is based upon something from attributes. It is called the lotus impact due to the fact that it was discovered that the lotus flower has self-cleaning properties on its leaves. This also puts on a few other plants such as walking cane, nasturtium and irritable pear. Furthermore, some insects such as specific butterflies and dragonflies have the very same ability. When Was The Lotus Result Uncovered? 
Experts initially started to learn this sensation in 1964, and the work was additional established by Barthlott and Ehler in 1977; it was they which first created the term the lotus result. The fallen leaves of the lotus flower have an extremely higher water repellent home which is called super-hydrophobicity. When it rains, water droplets roll throughout the fallen leaves and get gunk, eliminating it from the surface, therefore making it possible for the plant to remain clean and the leaves to perform their function of photosynthesis to enable the plant to increase. The higher area stress of a water droplet suggests that it has a tendency to reduce its surface area in a venture to obtain a shape which is as close to a sphere as possible. On making contact with an area, forces of bond cause the surface area to become moist. The surface might end up being partially damp or entirely wet and this will certainly rely on the fluid strain of the water droplet and the sticky nature of the surface area. The less of the water droplet that touches with the area, the greater that surface s hydrophobicity is stated to be. This hydrophobicity can be gauged by the get in touch with angle of the water droplet on the surface area. The reduced the get in touch with angle, the reduced the hydrophobicity and the other way around. If a call angle on a certain area is less compared to 90 degrees the surface is described as hydrophilic. More than 90 degrees it is hydrophobic. Some plants have a call angle of as high as 160 levels meanings that only about 2 % of the water droplet touches with the surface area. When it come to lotus leaves, the get in touch with angle is as high as 170 degrees. These surfaces are stated to be super-hydrophobic. The area of a water droplet touching a lotus fallen leave might be as litlle as 0.6 %. How Does Filth Get Washed Off The Surface area? 
When dirt gets on to such a surface the amount of bond between the dirt particle and the area is much much less than on other surfaces. When a water droplet rolls across the surface area the quantity of attachment in between the surface and the gunk particle is much less compared to that between the dust fragment and the droplet, so the filth fragment is gotten by the droplet and carried away causing automatic cleaning of the surface. This only works considering that of the higher level of surface area tension of a water droplet and does not function in the very same means with organic solvents. Essentially, that is just how it works for the lotus fallen leave. Specifically the exact same concept is made use of in nano paint modern technology for car paint protection Melbourne. Nano modern technology has actually progressed to the factor where a challenging safety ceramic covering can be put on the lacquered finished surface of the car s paintwork when it leaves the manufacturing facility. The lacquered area is not insusceptible to bird droppings, UV, or chemical etching and can be easily ruined or scraped. When this happens the only option is to brighten off the scrape marks or swirl marks therefore decreasing the density of the manufacturing facility paint layer. Over time this implies that at some point a respray will certainly be required. Many people will certainly utilize a wax polish or a polymer paint sealer on the paintwork, yet this still leaves the paint surface area vulnerable to damages from square one, bird droppings and so forth, as these could permeate the polish or sealant. The solution is to use a tough nano paint innovation covering to the surface area of the paintwork. The finish is far less at risk to damages compared to other area layers, however even if swirl marks or other damages ought to happen the finish itself could be brightened off and re-applied. 
This means that the factory paintwork will not be harmed and will retain its thickness and shine. Essentially, nanotechnology replicates the effects found in nature in a simple way, and super-hydrophobic coatings have been used on man-made surfaces for a considerable number of years. One such application is self-cleaning glass used for windows, but they have been applied in many other areas. For example, super-hydrophobic coatings incorporating Teflon particles have been used on medical diagnostic slides. The same technology has been used for things as diverse as roof tiles and leather upholstery. It can be applied to satellite dishes, for instance, to reduce the likelihood of rain fade and to counteract any build-up of ice and snow on the antenna. It has even been used for tree planting. The Groasis Waterboxx is a device designed for growing small trees in areas of severe drought, which allows enough water to reach the young sapling's roots until they grow deep enough into the ground to reach water on their own. The Waterboxx can then be removed and used again elsewhere. Nanotechnology allows the Waterboxx to collect dew and condensation and channel it down to the roots of the sapling even in desert conditions. To protect the paintwork of your car and keep that fresh-out-of-the-showroom look with none of the hard work, nano paint protection is the answer for the discerning motorist. You will never have to worry about scratch marks or bird droppings, and your car will only need a quick rinse to remove any dirt. It doesn't matter whether your car is brand new or several years old; we can keep it the way it is, or restore it to the way it was.
You will also be able to laugh at your neighbours washing and polishing every Sunday morning! Hopefully you have found this article helpful about car paint protection Melbourne. Visit this site again for more information about paint protection Sydney. You are in the car dealership's showroom. You have just agreed to buy a brand new car. You are happy that you have negotiated a great price and you have shaken hands on the deal with the salesman. He invites you to sit down in order to fill out the paperwork. However, before he does so he starts talking to you about car paint protection. You're thinking: Hold on. I've just bought a brand new car. Why does it need its paint protecting? Is there something wrong with it? You may be relieved to know that there is absolutely nothing wrong with the paint on your shiny new car. However, when you drive it from the showroom it is going straight out into the Australian weather and, unless you have a garage, that is where it is going to stay until either you sell it, or it reaches the end of its life. Protecting the paint on new cars is simply common sense. Why Does My Car Need Protection From The Weather? There are two or three things that the weather can do to your car's paint. First, the ultraviolet rays of the sun can cause oxidation and premature fading of the paint, in a similar fashion to the damage they can do to your skin. The sun in Australia can get very hot and, compared to a cloudy country like Germany, for instance, shines for many more hours every year. Acid rain will also affect the paint surface, and a hailstorm can do damage. If you live near the coast, you will often find on a windy day that the car is covered with salt blown off the sea, and if you park near the water's edge your car can get covered in sea spray. OK, I Can See That. Anything Else? There most definitely is. Birds. A simple bird dropping can cause damage to your paintwork within a matter of a few days. Without going into too much graphic biological detail, bird droppings come from the digestive system of birds and often contain high amounts of acid which, of course, will damage the paint.
Quite often, you might simply not notice bird droppings, or you may notice them and think to yourself that you will wash them off at the weekend, by which time the damage may have been done. If all that wasn't enough, there is then the little matter of damage caused by debris, stones, grit and so on thrown up by other vehicles on the road. It isn't a question of whether your paint will get damaged, it is merely a question of when. A huge number of windscreens are damaged by flying objects every year, yet far more stones will hit the front of the car. You could be unlucky and get your first paint chip a mile from the showroom! Protecting the paint on new cars follows the well-held theory that prevention is better than cure. There are several benefits, not the least of which is that a car with perfect paintwork is going to fetch a much better price when it comes time to sell it on and buy a new one. Why Shouldn't I Simply Use Wax? It Would Be Far Cheaper. There is absolutely no question that wax will give your new car a great shine. However, car wax is called that because it is largely made up of wax. As everyone knows, wax melts in heat: the warmer it gets, the faster the wax melts. Under the hot Australian sun the wax is going to melt sooner rather than later, which means that it will lose its lustre and be prone to trapping dirt and other pollutants. Can I Apply Paint Protection Products Myself? You can. Nevertheless, as with many things in life, you are far better off getting the job done professionally; visit this site for someone to do that for you. To begin with, if you apply paint protection yourself you will not get any warranty, for the simple reason that the manufacturer of the product you are using doesn't know whether you will apply it properly.
In fact, one very well-known manufacturer who offers a warranty on their product specifically states that it has to be applied by an accredited installer or the warranty is void. The Car I Am Buying Has a Ten Year Guarantee On The Paintwork. Just Tell Me The Benefits of Paint Protection Again. Sure. What you are getting is a car that is going to look in better than showroom condition the whole time you own it. You won't have to wax or polish it. Washing is quicker and easier. When you come to sell it you will get a far better price for it because it still looks its best. You couldn't really ask for much more from any product. The car detailing industry has grown by leaps and bounds from the time when waxes provided the best shine, followed by sealants that offered lustre as well as longevity. A relatively new field of chemistry has brought about greater advances in surface care in recent times, in the form of nano paint protection, which has proved to offer far superior detailing compared to waxes and sealants. What Is Nano Paint Protection? Nano paint protection uses nanotechnology to provide coating solutions for car bodies, windscreens, chrome surfaces, rims, headlights, underbody and corrosion protection and so on. It also provides liquid-repellent protection for upholstery and seats. The purpose of the technology is to deliver better, longer-lasting shine, increase safety when driving in poor weather and extend car wash cycles. With the help of nano-based sealants, the paintwork is typically protected by a layer of modified, hard-as-glass fluorocarbon nano particles. The coating is intended to freshen up colours, repel dirt and offer excellent water-repelling abilities, which is the highlight of this type of coating.
Because the nano protection is an additional layer of hard coating over the paint, it can only be removed by abrasion. It also typically protects against the light scratches and swirl marks that can occur at the car wash. Nano coatings for car bodywork repel dirt, water, oils, dead insects and other contaminants that increase the need for regular cleaning. The coating improves weather resistance, water resistance and resistance to corrosion, and even shields the paint from UV rays. The protection provided by a bodywork coating is meant to keep the body paint blemish-free for longer and reduce the frequency of washing. This technology is generally recommended for newer cars, not older than 5 years. Nano-based rim sealants are meant to protect chrome or alloy rims from the staining effect of brake dust. Rims can stay cleaner for much longer thanks to the coating's water- and oil-repelling properties, and sediment and dirt can be wiped off with just a damp sponge. Steel components like grills, bumpers, mirror covers, slats and so on are prone to tarnishing from fingerprints, dirt and other contaminants. These metals can be sealed with nano chrome protection coatings to make them water repellent; they too can then be cleaned as needed with a damp sponge. Many detailers offer nanotechnology-based anti-fog protection that stops hazardous mist from forming on windows in autumn and winter. These coatings are meant to improve nighttime visibility, despite the glare from oncoming traffic. Windscreen protection usually uses a hydrophobic (water-repelling) nano coating intended for glass surfaces. This coating repels raindrops and leaves the windscreen dry even in heavy rain, limiting the need for windscreen wipers in such risky conditions. Does Paint Protection For Cars Actually Work?
Tests have revealed that nano paint protection is far superior to the regular sealants that car owners have been used to until now. While regular sealants normally have to be re-applied every 4 to 5 months, nano paint protection has been found to last between 9 months and 5 years, depending on conditions. The coatings act as true barriers on the surface, unlike the temporary barrier provided by sealant or wax. Suppliers often offer guarantees of 5 years for their nano paint protection products. Vehicle owners who have used nano coating have observed what is called the lotus effect. Just as water droplets (and dirt contaminants) are repelled by a lotus leaf, the complex nanoscopic properties of the nano coating reduce the tendency of water droplets to adhere to the surface. The self-cleaning property of lotus leaves (and those of various other plants) has inspired scientists to develop a number of similarly behaving materials, and nano paint coating is one of them. It has widely demonstrated the ability to repel water and keep dirt from sticking to the surface, and is set to be the car paint protection formula of the future. How Much Does Nano Paint Protection For New Cars Actually Cost? Nano paint protection is not cheap. It is offered at a range of prices depending on the dealer or the detailer supplying it. It can cost anywhere between $300 and $400 (at a local detailer) or upwards of $1000 for more comprehensive packages at specialist car retailers. If you're buying the full bundle that many retailers offer with new vehicles, it can cost you over $2000 together with application, and you will have your coated car delivered to you. Is Paint Protection For Cars Worth The Cost? As outlined above, the up-front cost of nano coating is in no way comparable to the price of waxing or regular sealants.
However, the long-term cost advantages of the coating offset the initial price for many people. Nano coating is definitely worthwhile for those who have to spend hundreds of dollars every couple of months to get their car cleaned and their tyres scrubbed to remove persistent brake dust stains. Vehicle owners can negotiate the price with their detailer, and either pick the whole package of interior and exterior nano coating or choose the individual coating options they prefer. Can I Apply Nano Paint Protection To My Own Car? Some nano coating manufacturers make their products available only to professional detailers; check their website. These coatings are easy to apply incorrectly, reducing the longevity of the protection they give. Other coatings can be bought at the car retailer where the new car was purchased and applied at home. It is highly recommended that when applying at home, you follow the directions closely and make certain that the working area is completely free of dust and contaminants. If you are interested in the advantages of nano coating when buying a new car, you should shop around at local detailers to compare prices before you decide to have the car retailer apply the coating for you.
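The cost trade-off above can be made concrete with a rough annualised comparison. The nano coating prices ($300-$400, lasting 9 months to 5 years) and the sealant re-application interval (every 4-5 months) come from the figures quoted earlier; the $80 sealant price per application is purely an illustrative assumption, so treat the numbers as a sketch rather than a quote:

```python
# Rough annualised-cost comparison: re-applied sealant vs a nano coating.
# The $80 sealant price per application is an assumed figure for illustration.
def annual_cost(price_per_application: float, lifetime_months: float) -> float:
    """Cost per year if the product must be renewed every `lifetime_months`."""
    return price_per_application * (12.0 / lifetime_months)

# Regular sealant: assumed $80 per application, re-applied every 4-5 months.
sealant_low = annual_cost(80, 5)
sealant_high = annual_cost(80, 4)

# Nano coating: $300-$400 at a local detailer, lasting 9 months to 5 years.
nano_worst = annual_cost(300, 9)    # cheapest product, shortest lifetime
nano_best = annual_cost(400, 60)    # dearest product, longest lifetime

print(f"Sealant:      ${sealant_low:.0f}-${sealant_high:.0f} per year")
print(f"Nano coating: ${nano_best:.0f}-${nano_worst:.0f} per year")
```

Under these assumptions the coating ranges from clearly cheaper (if it lasts the full 5 years) to more expensive (if it only lasts 9 months) than routinely re-applied sealant, which is why the lifetime achieved in practice matters so much to the value question.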
The auto outlining industry has actually grown by leaps and bounds from the time when waxes provided the most effective sparkle adhered to by sealers that gave shine in addition to long life. A fairly new industry of chemistry has actually brought about higher improvements in area treatment in current times, in the form of nano paint protection, that is proven to offer far premium describing than waxes and sealers. Just what Is Nano Paint Protection? Nano paint protection uses nanotechnology to provide layer solutions for auto physical bodies, windscreens, chrome areas, rims, headlights, underbody and rust security etc. It likewise offers liquefied repellent protection for upholstery and seats. The purpose of the modern technology is to provide best, much longer lasting sparkle, boost safety when driving in bad weather and prolong automobile wash cycles. With the assistance of nano-based sealers, paintwork is generally safeguarded by a layer of customized, hard-as-glass fluorocarbon nano particles. The finish is indicated to refresh up colours, fend off dirt and deal exceptional water-repelling abilities, which is the emphasize of this kind of finish. Due to the fact that the nano defense is an added layer of hard finish over paint, it can simply be taken out by abrasion. It also typically shields versus light scratches and swirl marks that could occur at the auto clean. Nano layer for vehicle bodyworks fend off filth, water, oils, dead insects and many others impurities that enhance the requirement for normal cleaning. It boosts weather resistance, water resistance, resistance to rust and even protects the paint from UV rays. Defense supplied by bodywork layer is meant to keep the body paint mark cost-free for longer and decrease the frequency of cleansing. This technology is often recommended for new autos, not older compared to 5 years. Nano-based rim sealants are suggested to shield chrome or alloy rims from the tarnishing result of brake dust. 
Rims can stay cleaner for longer as a result of the covering’s water and oil repelling homes. Sediments and dirt could be cleaned off with merely a moist sponge. Steel elements like grills, bumpers, mirror covers, and slats etc. are prone to staining from finger prints, dust and other impurities. These steels could be sealed with nano chrome defense coatings to make them water repellent. They can likewise be washed as required with a moist sponge. Many detailers supply nanotechnology-based anti-fog defense that avoids dangerous mist from basing on windows in fall and wintertime. These finishes are meant to enhance nighttime see, despite the glow from oncoming quality traffic.. Windshield defense generally makes use of hydrophobic (water-repelling) nano layer suggested for glass surfaces. This coating wards off rain declines and leaves the windshield dry also in heavy rainfall, limiting the use of windscreen wipers in such hazardous conditions. Does Paint Protection For Cars Really Function? Tests have exposed that nano paint protection is much superior to the routine sealers that car owners have actually been utilized to so far. While routine sealers generally have to be re-applied every 4 to 5 months, nano paint protection has actually been found to last between 9 months to 5 years, depending on conditions. The finishes work as true barriers on the surface, unlike a momentary obstacle offered by sealant or wax. Producers often offer service warranties of 5 years for their nano paint protection support services. Vehicle owners that have actually made use of nano finish have actually seen just what is called the lotus effect. Equally water droplets (and dirt pollutants) are pushed back by a lotus leaf, the complicated nanoscopic residential properties of the nano covering reduces the tendency of water droplets to stick to the area. 
The self-cleaning property of the lotus leaves (and those of many others plants) has actually motivated researchers to develop a number of in a similar way acting materials. Nano paint covering is one of them. It has widely revealed the capacity to fend off water and keeping filth from adhering to the surface, and is ready to be the car paint protection formula of the future. Just how much Does Nano Paint Protection For New Cars In fact Cost? Nano paint protection is not cheap. It is supplied at a selection of costs relying on the dealership or the detailer issuing it. It can set you back anywhere in between $300 and $400 (at a regional detailer) or up-wards of a $1000 bucks for additional inclusive package deals at professional automobile retailers. If you’re getting the complete package that numerous stores supply with new vehicles, it could likewise cost you over $2000 along with application, and you will certainly have your covered car provided to you. Is Paint Protection For Cars Worth The Price? As detailed above, the price of nano finish at face value is never comparable to the cost of waxing or routine sealers. Nonetheless, the long-term cost advantages of the finish offset the preliminary price for many individuals. Nano coating is certainly rewarding for those that have to spend hundreds of bucks every few months to get their auto washed and their tires scrubbed to take out persistent brake dirt blemishes. Car owners could discuss the prices with their detailer, pick the whole package deal of interior and external nano layer or choose the individual coating alternatives they choose. Can I Apply Nano Paint Protection To My Own Auto? Some nano covering manufacturers make their items readily available only to expert detailers. These finishes are very easy to apply incorrectly, reducing the durability of the protection they offer. Other layers can be bought at the vehicle retailers where the new vehicle was purchased and applied in your home. 
However, it is highly advised that when applying at home, you follow the directions closely and ensure that the workspace is entirely free of dust, dirt and contaminants. If you are interested in getting the benefits of nano coating when acquiring a new automobile, you should shop around at local detailers to compare costs before you choose to have the auto retailer apply the coating for you. The automobile detailing industry has grown by leaps and bounds from the time when polishes gave the best shine, followed by sealants that offered shine as well as durability. A relatively new branch of chemistry has led to greater innovations in surface care in recent times, in the form of nano paint protection, which has proven to provide far superior detailing than waxes and sealants. Nano paint protection uses nanotechnology to supply coating solutions for automobile bodies, windshields, chrome surfaces, rims, headlights, underbody and rust protection, and so on. It additionally offers liquid-repellent protection for upholstery and seats. The purpose of the technology is to offer better, longer-lasting shine, improve safety when driving in bad weather and extend car wash cycles. With the help of nano-based sealants, paintwork is normally protected by a coating of modified, hard-as-glass fluorocarbon nanoparticles. The coating is meant to freshen up colors, repel dirt and offer outstanding water-repelling abilities, which is the highlight of this sort of coating. Because the nano protection is an extra layer of hard finish over the paint, it can only be removed by abrasion. It additionally usually protects against the light scratches and swirl marks that can occur at the car wash. Nano coating for automobile bodywork wards off dirt, water, oils, dead insects and other impurities that increase the need for frequent cleaning.
It improves weather resistance, water resistance and resistance to rust, and even protects the paint from UV rays. Protection provided by bodywork coating is meant to keep the body paint blemish-free for longer and reduce the frequency of cleaning. This technology is usually recommended for new vehicles, not older than 5 years. Nano-based rim sealants are meant to protect chrome or alloy rims from the tarnishing effect of brake dust. Rims can stay cleaner for much longer due to the coating's water- and oil-repelling properties. Sediment and dust can be cleaned off with just a damp sponge. Metal elements like grills, bumpers, mirror covers and slats are prone to staining from fingerprints, grime and other pollutants. These metals can be sealed with nano chrome protection coatings to make them water-repellent. They can likewise be washed as needed with a damp sponge. Several detailers supply nanotechnology-based anti-fog protection that stops unsafe mist from forming on windows in autumn and winter. These coatings are meant to improve nighttime visibility, despite the glare from oncoming traffic. Windshield protection typically uses a hydrophobic (water-repelling) nano coating meant for glass surfaces. This coating repels raindrops and leaves the windshield dry even in heavy rainfall, limiting the use of windshield wipers in such dangerous conditions. Tests have shown that nano paint protection is far superior to the regular sealants that automobile owners have been used to up until now. While regular sealants usually have to be re-applied every 4 to 5 months, nano paint protection has been found to last between 9 months and 5 years, depending on conditions. The coatings act as true barriers on the surface, unlike the short-lived barrier provided by sealant or wax.
Manufacturers usually offer warranties of 5 years for their nano paint protection services. Vehicle owners who have used nano coating have seen what is called the lotus effect. Just as water droplets (and dirt contaminants) are repelled by a lotus leaf, the intricate nanoscopic properties of the nano coating reduce the tendency of water droplets to adhere to the surface. The self-cleaning property of lotus leaves (and those of various other plants) has inspired scientists to create a number of similarly acting materials. Nano paint coating is one of them. It has commonly shown the capacity to ward off water and keep dirt from sticking to the surface, and is set to be the car paint protection formula of the future. Nano paint protection is not economical. It is offered at an assortment of prices depending on the supplier or the detailer providing it. It can cost anywhere between $300 and $400 (at a neighborhood detailer) or upwards of $1000 for more comprehensive packages at expert vehicle retailers. If you're acquiring the complete plan that lots of retailers offer with new automobiles, it can also cost you over $2000 in addition to application, and you will have your coated car delivered to you. Is Paint Protection For Cars Worth The Price? As described above, the expense of nano coating at face value is nowhere near comparable to the cost of polishing or normal sealants. Nevertheless, the long-term cost advantages of the coating offset the initial cost for many people. Nano coating is definitely worthwhile for those who have to spend hundreds of dollars every couple of months to get their automobile cleaned and their tires scrubbed to get rid of persistent brake dust stains.
Vehicle owners can discuss prices with their detailer, pick the whole package of interior and exterior nano coating, or choose only the individual coating options they prefer. Can I Apply Nano Paint Protection To My Own Vehicle? Some nano coating suppliers make their products available only to professional detailers. These coatings are easy to apply incorrectly, lowering the longevity of the protection they provide. A few other coatings can be bought at the automobile retailer where the brand-new vehicle was acquired and applied at home. It is strongly advised that when applying at home, you follow the directions closely and make certain that the working area is totally free of dust, grime and impurities. If you want to get the advantages of nano coating when acquiring a brand-new car, you should shop around at local detailers to compare costs before you decide to have the auto retailer apply the finish for you. I trust you have found this article informative about car paint protection. Go ahead and check out this page for more details about paint protection Perth.
All through class I kept being astounded at how complex implementing cache coherency is. Even in our simple model, there are a large number of steps that need to be performed. Doesn't this negatively affect performance, and the benefits of using a cache in the first place? Specifically, how much slower is a processor with cache coherence than one without? Of course it depends on the specific protocol, but based on this evaluation of the cost of cache coherence protocols, it seems that 35% slower might be a reasonable figure for MESI. @sbly, sure this will take away from the performance of a cache, but it is still a significant speedup over a system without a cache. As @cardiff's link seems to be dead, I would simply say it's hard to say how much slower a processor with cache coherence is. This is because without cache coherence, a lot of things would break, since we as programmers have basic assumptions about how the memory system works. As the cache abstraction should not really change our mental model of how memory works, having no cache coherence can really cause some problems. Therefore, I think it's hard, and not very useful, to compare the speed difference between a system that works with coherence and a system that has issues without it. I believe I have fixed the dead link in my comment above. Thanks @ycp for pointing out the problem. To summarize, here are the definitions of these concepts. Deadlock "is a state where a system has outstanding operations to complete, but no operation can make progress" (slide). This is generally unrecoverable. Livelock "is a state where a system is executing many operations, but no thread is making meaningful progress" (slide). This is generally unrecoverable without outside intervention. Starvation "is a state where a system is making overall progress, but some processes make no progress" (slide). This may be recoverable, but is less-than-ideal.
Mutual Exclusion: Some resource/lock is allocated to one process/thread at a time. Hold & Wait: Processes/threads must have the potential to hold a lock while waiting for more. No Preemption: No one is able to "give up" a resource. Circular Wait: A cycle can be defined in the resource graph (ex, process 0 needs something from process 2, who needs something ... who needs something from process 0). Deadlock MUST HAVE ALL FOUR REQUIREMENTS to occur. Absolutely. How do those four conditions manifest in this intersection example? Mutual Exclusion: Our critical section is space on the road. A space of road can only hold one car at a time. Hold & Wait: A car "holds" a resource by being on the road. It "waits" for the road it wants to travel on by simply stopping the car until the space is available. No Preemption: Assuming this intersection deadlock situation only allows cars to move forward, cars cannot give up the road they are on because they have nowhere else to drive to. Circular Wait: Each car needs the road in front of it to drive forward. Thus, the green cars need the resource occupied by the yellow cars, while the yellow cars need the resources occupied by the green cars, which forms a cycle. We can note that since the 3rd requirement doesn't hold in real life (since cars can move in reverse), we don't have true deadlock in this situation. Mutual Exclusion : Each unit of road space in the intersection is being held by a car, and each of those units can be held by at most one car at a time. Hold & Wait : A car holds a unit of road space, and can choose to remain there (assuming the civility of other road users :D), while waiting for the next road space it intends to consume to free up. In this situation, the green cars are holding a quarter of the intersection each, and each is waiting for a quarter being held by the yellow cars, and vice versa. No pre-emption : This one is slightly confusing; each car is in fact able to back-up and so "give up" its resource. 
It makes more sense if you think of it in terms of a supervisor/operating system; no pre-emption on Wikipedia is defined as 'The operating system must not de-allocate resources once they have been allocated; they must be released by the holding process voluntarily'. In this example, I suppose you could say that there's no supervising traffic light that'll force the green cars to back up and allow the yellow cars to pass. Each car has complete, individual control over whether or not to give up its resource. Circular wait : Each car is waiting for the road-space being held by its counter-clockwise adjacent neighbor. This is deadlock because neither animal is making any progress. The frog is holding the other animal's throat so it can't eat the frog. The frog cannot really do anything because it refuses to let go of the animal's throat and therefore, can't move either. @analysiser not dying seems like a decent candidate, assuming the frog is out for revenge. It seems to me that the frogs are winning in both situations. I think the first example can also be interpreted as starvation, as I agree with iamk_d__g that the frog is going to be the winner here. Assuming the frog can still breathe (through its skin), the resource in contention here is air. However, the frog is preventing the stork from getting any air by blocking its breathing passages. In this case the frog will eventually let go or the stork will suffocate, and the starvation will end. Still seems like a bit of a stretch -- maybe we're looking into this a bit too much =D. Another example of deadlock happens with mutexes in a multi-threaded program. Say I have two variables, x and y, which are both locked so that only one thread can modify them at a time. There are two different operations: x = x + y; and y = y + x.
If two threads enter these different sections at the same time, one thread will acquire the mutex for x and the other will acquire the mutex for y, but neither will be able to perform the operation because both mutexes are needed in each case. Two-phase locking is used in many databases to avoid deadlocks. Pretty much each transaction (unit of work) is divided into two phases, a phase during which it can only acquire locks and a phase during which it can only release locks. If all transactions follow two-phase locking, then you can ensure that all schedules are serializable. One very good intuitive example for this concept that was mentioned in class was the situation where two people bump into each other in a hallway, and then move back and forth in the same directions as they try to pass each other. In this situation, they are both still trying to do work (move past each other), but making no progress. So I'm curious, if a situation that would have caused a livelock did not have a backing-off mechanism, wouldn't a deadlock occur instead? Thus would it be fair to say that deadlocks are a subset of livelocks? Livelocks often occur because there is a mechanism for recovering from deadlock that is repeatedly triggered, so the removal of the mechanism would cause deadlock to occur. However, by definition, you wouldn't be able to say that deadlocks are a subset of livelocks. Livelocks can be considered a special case of starvation though, I think. On the next slide it says starvation is not usually a permanent state. It seems that in general livelock is also not necessarily a permanent state (you could get lucky and eventually one of the restarts will be timed correctly to successfully proceed). On the other hand, if you somehow have restarting but the livelock can never be avoided even with luck (e.g., if your program has no branch that would not deadlock), then it doesn't seem too meaningful to distinguish between livelock and deadlock since the restarts do nothing.
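The two-mutex x/y situation above is exactly the "circular wait" condition from the Coffman list: thread A holds x and waits for y, while thread B holds y and waits for x. A minimal Python sketch (names hypothetical, not from the lecture) of how a runtime could detect such a cycle in a wait-for graph:

```python
# Hypothetical deadlock detector: each thread waits on at most one other
# thread (the holder of the lock it wants). A cycle in this wait-for
# graph means the "circular wait" condition holds.

def has_circular_wait(wait_for):
    """wait_for maps each thread to the thread it waits on (or None)."""
    for start in wait_for:
        seen = set()
        cur = start
        while cur is not None and cur not in seen:
            seen.add(cur)
            cur = wait_for.get(cur)
        if cur is not None:   # we revisited a thread: a cycle exists
            return True
    return False

# Thread A holds mutex(x) and waits for B, the holder of mutex(y);
# B in turn waits for A: the classic two-mutex deadlock.
deadlocked = {"A": "B", "B": "A"}
# Here B is running (waiting on nothing), so A will eventually proceed.
fine = {"A": "B", "B": None}
```

Since each thread waits on at most one lock holder, following the chain until it terminates or repeats is enough; a general resource graph would need full cycle detection.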
So I guess it seems like you could classify livelock as either starvation or (equivalent to) deadlock. As stated above, livelock occurs when there is a mechanism for recovering from deadlock that is repeatedly triggered, so if 2 people are in a corridor, they both try to get out of each other's way ensuring they are still stuck. The way to recover from livelock therefore is to ensure that only one process (chosen randomly or priority) tries to recover from deadlock so that the processes can proceed. In our analogy, only one person would try to get out of the other person's way, ensuring that both people can proceed. To rephrase what's been said above, livelock occurs when threads continue to change their states without making overall forward progress. Therefore it occurs in one of two cases: when threads have the ability to undo their computation, and when they can undo each other's computation. The former happens mainly in optimistic algorithms (instead of locking, try to do something and then undo it if there was a conflict), and the latter in deadlock avoidance (kill another thread if it interferes with you, or a set of threads if they're in a deadlock). Livelock looks exactly like deadlock to users, but in some ways it's even worse than deadlock, because livelocked threads are still consuming the computer's resources while deadlocked ones aren't. It seems strange to me to classify livelock as a subset of starvation since in starvation usually something is making progress and in livelock nothing is; I'm not sure I understand the distinction there. I can see it as a subset of starvation in the sense that they're both situations where the computer is constantly doing work, but the desired result is not occurring in terms of completed processes. How would a program detect livelock? It seems easy for a computer to detect deadlock occurring since nothing is progressing, but automatically sensing livelock would be a much harder problem. 
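The hallway example discussed above can be made concrete with a deterministic toy simulation (entirely hypothetical, just to illustrate the shape of livelock): both people react to a collision with the identical "dodge to the other side" rule, so the system keeps changing state without ever making progress, and a tie-break in which only one person dodges resolves it.

```python
# Toy livelock simulation: two people share a corridor with two sides
# (0 and 1). Each step, if they collide, each dodges to the opposite
# side -- symmetrically, so they collide forever. With a tie-break
# (only one person dodges), they pass each other immediately.

def hallway(steps, tie_break=False):
    a, b = 0, 0                     # both start blocking the same side
    for step in range(steps):
        if a != b:                  # different sides: they walk past
            return step             # progress after `step` dodges
        a, b = 1 - a, 1 - b         # symmetric dodge: still colliding
        if tie_break:
            b = 1 - b               # only one person actually dodges
    return None                     # no progress ever made: livelock
```

Lots of state changes, zero progress: that is the difference from deadlock, where nothing changes state at all.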
@asinha: one "random" choice that could help them get unstuck (eventually) could be chance in a concurrent system. At some point, one person may switch faster than the other and find his way through. Obviously, we'd prefer not to have to rely on chance, though, so it's better to explicitly use one of the fixes you mentioned. @mchoquet: I think the idea behind people saying livelock is a subset of starvation is that in starvation some processes can make no progress, whereas in livelock all processes can make no progress. By this definition, livelock could be an instance of starvation (all ==> some). I did notice that the next slide says that in starvation some process is making progress, which would then make them mutually exclusive. I think it's more useful to be able to generalize before looking at the specific differences in classification, though. @retterermoore: that's why formal reasoning about your code is better than testing (when possible). Livelock should be easily detectable through some form of verification. Then, rather than "detecting" livelock when it happens, you could prevent it. There might not be a single way to detect livelock in a system being executed, but if something is progressing through the same cycle of states repeatedly in an unusual way, that's one indicator of possible livelock. From my understanding, if some processes make absolutely no progress, it's starvation. If processes can still make progress but very limited compared to some other processes, it is a fairness issue but not starvation. According to Wikipedia, this is incorrect---starvation is characterized by a "severe deficiency in caloric energy, nutrient, and vitamin intake." This comment was marked helpful 5 times. Starvation is usually caused by scheduling policies with a notion of priority. If a process with higher priority never blocks, other processes with lower priority will rarely get resources.
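The priority point above can be sketched with a toy scheduler (hypothetical, not from the lecture): a strict-priority policy always runs the highest-priority ready task, so as long as the high-priority task never blocks, the low-priority task is starved, even though the system as a whole keeps making progress.

```python
# Toy strict-priority scheduler: each time slice goes to the ready task
# with the highest priority. A long-running high-priority task starves
# everything below it -- overall progress, but no progress for "lo".

def run(tasks, slices):
    """tasks: {name: (priority, work_units)}; returns work done per task."""
    work = {name: 0 for name in tasks}
    remaining = {name: w for name, (p, w) in tasks.items()}
    for _ in range(slices):
        ready = [n for n in tasks if remaining[n] > 0]
        if not ready:
            break
        chosen = max(ready, key=lambda n: tasks[n][0])  # strict priority
        work[chosen] += 1
        remaining[chosen] -= 1
    return work

# "hi" has 1000 units of work and never blocks; "lo" only needs 5 units
# but never gets a single slice out of 100.
result = run({"hi": (10, 1000), "lo": (1, 5)}, slices=100)
```

Fixes like aging (gradually boosting the priority of waiting tasks) change `max(...)` into something that eventually favors "lo".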
Another interesting example I learned from Wikipedia: Suppose Priority(process3) > Priority(process2) > Priority(process1) and process3 depends on process1. If process2 holds the resource for a long time, process3 will never move forward even though it has higher priority, as it cannot get results of process1. @chaihf I'm a bit confused by your example. The way I understand it, process1 and process3 are being starved while only process2 is progressing, but I would think that process1 would be able to run to completion once process2 releases the resource, meaning process3 will also be able to move forward eventually. If you could give a link to the example, that'd be great. A special case of starvation is write-starvation in a system with a readers-writers lock that gives readers priority over writers at all times. This lock ensures that multiple readers can read a resource at the same time, but only one writer can write to the resource at a time. In this case, if there is a steady stream of readers and a writer is waiting to acquire the lock, then the writer will wait forever (or at least until the stream of readers dries up), which illustrates the principle of starvation. @devs: It is called priority inversion. The basic idea is that a high-priority process has to wait on/block for a low-priority process. @devs: It is called priority inversion. The basic idea is that a high-priority process has to wait for/be blocked by a low-priority process. This is an important problem for real-time systems. So, to avoid starvation in this case, we need to have some policy or traffic light to guarantee fairness. The shared bus here is atomic in terms of transactions. That means only participants of a single transaction can use the bus during the transaction. Here is an example from class. A cache asks for data in memory.
With an atomic shared bus, this read transaction would occupy the bus until the data is sent back from the memory, even though there may be a period when the bus is actually idle. With a non-atomic bus, other transactions are allowed when the bus is idle. Question: After sending the address and appropriate command on the bus, why does it have to wait for the command to be accepted? I thought that requesting the bus and getting the bus grant had already guaranteed that the cache controller has exclusive access to the bus. Correct, but "wait for command to be accepted" on this slide meant wait for all other processors to acknowledge that they have seen the message. An important realization here was that steps 3-7 constitute the "shoutout for data" in the snooping-based coherence scheme studied before. Thus these 5 steps need to happen atomically. For a uniprocessor, aren't steps 4 and 7 unnecessary, since the single processor is guaranteed to have exclusive access to the bus? There are other things connected to the bus that also need access (most obviously the memory, but potentially other non-processor devices as well depending on the system) -- a bus with only one device on it wouldn't be very useful! This problem actually relates to starvation. If the processor keeps receiving priority, the cache can't get a chance to do the tag lookups and thus holds up the whole system. On the other hand, if the cache keeps receiving priority, it will prevent any further progress on the local processor. @azeyara, I'm a little confused by your comment. Could someone please give an example of how this enhancement would work, and when it would help? Thanks! Assume a cache read miss in MESI: (1) should the memory not respond as long as there's a shared copy in another cache? (2) if the line is dirty, should it be flushed to memory? @iamk_d__g I think the answer to both your questions depends on the implementation.
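One plausible answer to the read-miss questions above can be shown with a toy MESI model (a sketch of one possible implementation, not any specific hardware): a dirty owner flushes to memory when it sees the BusRd, everyone who had the line drops to Shared, and the requester loads Exclusive only if nobody else had a copy.

```python
# Toy sketch of servicing a BusRd read miss under MESI. Assumptions
# (one common design, not the only one): the dirty owner flushes to
# memory during the transaction, and the line ends up Shared in both
# the old owner and the requester.

M, E, S, I = "M", "E", "S", "I"   # the four MESI states

def bus_rd(requester, caches, memory, addr):
    """caches: {name: {addr: state}}; memory: {addr: 'clean' or 'stale'}."""
    others = [c for name, c in caches.items() if name != requester]
    dirty = any(c.get(addr) == M for c in others)
    shared = any(c.get(addr) in (M, E, S) for c in others)
    if dirty:
        memory[addr] = "clean"     # owner flushes before data is consumed
    for c in others:               # any M/E holder downgrades to Shared
        if c.get(addr) in (M, E):
            c[addr] = S
    caches[requester][addr] = S if shared else E

# P1 holds the line Modified; P0 read-misses on it.
caches = {"P0": {}, "P1": {0x10: M}}
memory = {0x10: "stale"}
bus_rd("P0", caches, memory, 0x10)
```

Other implementations answer question (1) differently, e.g. cache-to-cache transfer without a memory write, which is exactly why "it depends on the implementation."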
To elaborate on Yuyang's response, I believe it would depend on the type of cache that is in use. In this case, we have a write-back cache, so on a cache read miss, the data must be fetched from other caches if the line is dirty and from memory if it is not. If this were a write-through cache, then the cache would only need to ask memory. However, if the line were dirty, it could also ask the other caches depending on the trade-off in latency between implementing such a protocol and the benefits of fetching from other caches vs. fetching from memory. If I recall correctly from lecture, whenever someone issues a "shoutout," the snoop-valid wire is set to high, and as each processor responds, it stops asserting high, and we know that every processor has responded to the shoutout when the snoop-valid wire goes to zero. I think I'm a little confused on what exactly causes the valid wire to be set to high, or who initially sets it. The idea that every processor stops asserting on the wire once it responds to the shoutout indicates to me that we require each processor to assert high immediately after the shoutout, and that the valid wire is set to high as a result of each processor asserting high on it. If this is true, what if we have a scenario like the fetch deadlock example toward the end of the lecture, and every processor is issuing a BusRd and waiting for everyone else to respond instead of responding itself? Wouldn't the valid wire never be set to high? Take the example that Processor A wants to read a variable x; by the MESI protocol it must wait for all other processors to check whether the data is in their cache lines, to determine whether to load the line in the Shared or Exclusive state. First Processor A gets permission from the controller to shout a BusRd over the bus. Processor B and Processor C are listening to the bus at this point.
When they hear the BusRd, they immediately drive a high voltage through the valid wire and begin to process the request (determining whether the data is in the current cache line, whether the line in the cache needs to be flushed or not, etc). As soon as B and C finish their tasks, they immediately assert low on the valid wire. Now keeping this in mind, we can respond to the situation caused by issuing many commands to read from all processors at the same time. First of all, each processor has to get permission from the bus controller before it will issue a BusRd shout. However, even if a processor is waiting to issue a command, it can still respond to signals that are conveyed across the bus. To quote the slide "To avoid deadlock, processors must be able to service incoming transactions while waiting to issue requests". Thus even if a processor has a read operation trying to be executed, on a well-designed snooping system this will not lock out the computational capacity of the processor or processor cache. A possible implementation of such a system that uses a separate bus controller and processor control for each cache could be extended to allow processors to wait for permission on a task while simultaneously looking for cache lines. However, I am not sure if a sort of cache ILP is how this is actually implemented. This comment was marked helpful 6 times. Question: Is it correct that a wire could only have 1 or 0 as its value? Does high mean its value is 1? yes, I think that is the correct way to think about it. Somehow it seems that there should be more than what is described here: We are relying on the Snoop-valid wire because we cannot tell when every processor has updated the voltage on Shared and Dirty. But if so, how do we know that every processor has raised the voltage on the Snoop-valid wire in the first place? 
What if processor A does not set Snoop-valid to high in time, and the rest of the processors do their thing, and as a result processor A loses its chance to respond? I am guessing the underlying problem is that checking with the cache for the shared/dirty bit is slower, and also more inconsistent in terms of the time it takes, relative to just responding and immediately raising the voltage on the Snoop-valid wire. Can someone with more hardware knowledge confirm/correct this? We know because the bus isn't really just a set of wires that connects a bunch of processors, memory and I/O devices. In addition to that, there's a ton of logic that exists in reality, which is responsible for determining things like who gets the bus and when, which device said what, "oh something important happened, let's interrupt all processors to do ____". Think of ^ that as the control bus. Maybe this picture will help? So to answer your question, I'd guess the Shared, Dirty, Snoop-Valid lines in the diagram could be part of the control bus, which uses a form of logic + some enable bits + a clock to make sure everything gets communicated on time. To add to vrkrishn: we can easily design a system where all processors' caches set the snoop-valid line to high after another processor is granted access to the bus. This can be achieved using a simple hardware thread in each processor's cache. This thread must set the snoop-valid line to high the cycle after seeing another cache get granted the bus. This would guarantee that the cache that was granted the bus does not see the snoop as being ready before other caches have responded. After the hardware thread sets snoop-valid to high, the cache can set it to low after asserting the correct values on the shared and dirty lines.
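The handshake described above can be sketched in software (a toy model of the wired-OR behavior, with hypothetical names): each snooping cache asserts its snoop-valid line on seeing the BusRd, drives the Shared/Dirty lines while it does its tag lookup, and deasserts when done; the requester trusts Shared/Dirty only once the OR of all snoop-valid lines reads 0.

```python
# Toy model of the snoop-valid handshake. Each cache asserts its
# snoop-valid line when it sees the BusRd, contributes to the wired-OR
# Shared and Dirty lines during its (possibly slow) tag lookup, then
# deasserts. The requester samples Shared/Dirty only when the wired-OR
# of every snoop-valid line has dropped to 0.

def snoop(caches, addr):
    """caches: {name: {addr: MESI-state}}; returns (shared, dirty)."""
    valid = {name: 1 for name in caches}       # all assert high on BusRd
    shared = dirty = 0
    for name, cache in caches.items():         # lookups finish at various times
        state = cache.get(addr, "I")
        shared |= state in ("M", "E", "S")     # drive the Shared line
        dirty |= state == "M"                  # drive the Dirty line
        valid[name] = 0                        # lookup done: deassert
    wired_or = max(valid.values())             # 0 only when EVERY cache is done
    assert wired_or == 0                       # now Shared/Dirty are trustworthy
    return int(shared), int(dirty)
```

This also shows why the lines carry meaning only as a group: Shared/Dirty may be in any intermediate state while any snoop-valid line is still high.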
@sluck, to address your question about a possible fetch deadlock, this won't happen because we have ensured in this example that we have an atomic bus, which means that no other commands happen between the BusRd/BusRdX and the processor receiving the data. (Refer to previous slides). Can someone explain point #2 of variable delay? Does the memory wait until it hears otherwise, or does it continue and then backtrack if it hears otherwise? @RICEric22, I think it deals with the case when the cache line is invalid: memory will respond only once it hears all caches respond "cache miss" or something else that indicates none of them has that address. So if the snoop can be done quickly, memory may respond in less time than a fixed number of clock cycles. @RICEric22 I'd imagine the memory still works the same way. I don't see why memory shouldn't employ backtracking, provided the overhead isn't massive. Variable delay seems to essentially be the same as a fixed number of clocks but with finer granularity. It won't need to ask every cache to reprocess the request if one fails to meet the cycle count deadline. Question: Since main memory is also connected to the caches by the interconnect, and so it's also listening to the snoop-valid, dirty, and shared wires, why can't it just, when snoop-valid is 0, immediately react to the request if needed? This seems to be the most intuitive and efficient way, and it doesn't seem to be more complex than the fixed number of clocks approach. @idl I think what you're trying to describe is variable delay, but more specific, since it may be a variable number of clocks until snoop-valid is 0. But as mentioned in lecture, it is a lot less hardware to assume X cycles, rather than add complexity for this logic. @idl, in addition, accessing memory when the line could possibly be present in another cache is simply wasteful.
Using a write-back buffer maintains coherence because reads are allowed to move ahead of writes in a memory consistency model when the W->R memory operation ordering condition is relaxed (e.g. total store ordering or processor consistency). We sacrifice sequential consistency for better performance. @tomshen, I do not agree. I think the write-back buffer maintains coherence because it is an extension of the cache, with status and tag bits. What you said is about memory consistency, not cache coherence. @yanzhan2, I'm somewhat confused. I think I understand tomshen's explanation of why this maintains consistency, but could you (or a fellow student) elaborate on why this maintains coherence? I remember Kayvon offering a different explanation about how the order in which caches claim lines in the modified state is the total order of all writes, and I see how this is maintained because write buffers get flushed when other caches want to read/write to that cache line, but the more ways I understand coherence/consistency the better. @mchoquet, coherence always relates to one memory address (or one cache line). Consistency relates to the ordering of load/store operations to all addresses. This means memory operations to different addresses, such as load A, store B, would not cause a coherence problem. Adding a write buffer, without giving up the M state, would not cause a coherence problem. If other caches want to read/write data in the write buffer, the same operations happen again as if the data were in the cache. So this means the write buffer is kind of an extension of the cache. I agree with your explanation above. I think one way to think about coherence here is to note that when we are evicting cache line X, we must have made a request to read a different cache line Y. Otherwise we would've had a cache hit and there would've been no eviction. And since we're dealing with two different memory locations, coherence isn't violated.
But we do have to be careful of new reads/writes happening on cache line X while it is still in the write buffer (not yet in memory). Not handling this correctly will mean that coherence is violated. Question: How are the "Tags and state for P" and "Tags and state for snoop" kept synchronized? I can see how the snooping tags may be updated based on the bus information issued by the processor-side cache controller. But how do the tags on the P side know when the snoop side changes some tags? Wouldn't that be very important? I don't know the details, but any change to the stored tags ought to correspond to a change in the actual data stored in the cache, right? I imagine that your guess is right, and both copies of the tag are updated at the same time as the cache data itself. I think that in most modern processors, when a cache line is updated, the tag bits can also be updated in the same clock cycle. It should be trivial to add the extra wires from the cache controller which are connected to the correct address register. I don't think the processor has to 'stall' for a cache line's tag bits to be updated - since it is done in one clock cycle, the processor-side controller will just notice that the slot in the cache which previously had a different line or no line now has a line with some tag. If the processor was completely waiting on getting the new data into the cache, then the processor would be stalled. If the processor has hyperthreading then it might have switched to the other hyperthread while waiting for the line to get into the cache, and in that case the processor does not have to stall. Same with single-thread ILP. Can someone explain how having a line in the dirty state is different from having a write-back buffer? I mean you could just mark the line dirty and leave it there. You only need to flush when another processor asks for it anyway. You can't avoid/delay that even if you have a buffer. So what's the point of a buffer in the first place?
@achugh I think you also need to flush when the processor's cache runs out of space. Hence, the point of the buffer is to allow the processor to obtain the new line without waiting for the write-back to complete in those cases (someone correct me if I'm wrong).

In this diagram, the green region is the logic performed when the processor makes a change to a piece of memory, and the yellow region is the logic performed when some other processor makes a change to a piece of memory and this processor receives a request from the bus. The red region highlights that when this processor receives a (write) request from the bus, the cache isn't the only thing checked to ensure correctness. The write-back buffer maintains its own tags for each item to be written back so that, on a BusWrX, values that are to be written back to memory but have since been modified by another processor are in fact not written back.

Thank you @uhkiv. Your answer is very clear. So the write-back buffer is used to hide the latency. However, the dirty bit is based on the belief that the dirty cache line may be changed again in the near future.

Ideally what could happen is for P2 at the end to notice that its pending BusUpg request is wrong, and modify it to a BusRdX instead. I'm guessing this is a very hard thing to do!

For those of you keeping score at home, I think the optimization described introduces a problem where two processors may write to the same memory location, but they don't tell the otter processors until the write is complete. As a concrete example, Jill may load memory address foo, then write 3 to foo in her local cache and put it back in global memory. Sometime while this is happening, her buddy Jack loads foo into cache, increments it, and then tries to put it back. Finally, Jill remembers to tell everybody, "Hey y'all, I wrote 3 to foo" and assumes everybody knows what's what. Now Jack is confused - did he increment correctly? We'll never know!
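The red-region check above can be sketched like this (a minimal toy, assuming the exclusive-write request transfers ownership of the line): the write-back buffer keeps its own tags, and when another processor's exclusive request hits a buffered line, the buffer supplies the data and cancels the pending write-back, since writing our now-stale copy to memory later would clobber the requester's newer value.

```python
# Toy write-back buffer with its own tag check (illustration only).
class WriteBackBuffer:
    def __init__(self):
        self.entries = {}  # tag -> dirty data awaiting write-back

    def insert(self, tag, data):
        self.entries[tag] = data

    def snoop_exclusive(self, tag):
        # Snoop path: checked in addition to the cache tags.
        # On a hit, hand the data over and cancel the pending
        # write-back -- the requester now owns the only up-to-date
        # copy, so our copy must never reach memory.
        return self.entries.pop(tag, None)

wbb = WriteBackBuffer()
wbb.insert(0x40, "dirty X")
assert wbb.snoop_exclusive(0x40) == "dirty X"  # buffer supplies the line
assert 0x40 not in wbb.entries                 # write-back cancelled
assert wbb.snoop_exclusive(0x80) is None       # no match: nothing to do
```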
So instead, we can use a write-back buffer. But why doesn't a write buffer have the same problem? It seems to me, based on the description in slide 31, that the effects of the write buffer only kick in when Jill's cache line containing foo is evicted. Wouldn't it be more important to ensure the invalidation message gets broadcast at an appropriate time?

First and foremost, I LOVE otters!!!!!!!! otter processors are sooooo cute!!!!!!

Some time ago, I acquired the M state for cache line X, I modified it, and it stays dirty in the cache (because we are using write-back instead of write-through). However, later on I need to access another cache line Y which conflicts with cache line X, so I must evict X and bring Y in. This is expensive, because we have two data transactions. So to cover that up, we bring Y in, and put the dirty line X into the write-back buffer (until the data bus is not so busy and we can flush X to memory). Remember that at this point no other processor has cache line X valid, because I have it in the M state. Now, if some processor wants to BusRd or BusRdX X, I will check both my tags and state and also my write buffer. I will see that I have X in the modified state, so I will respond with "Hold on, I have it dirty, you need to wait for me to flush it to memory", and then do the actual flushing.

@yuyang so re: your example, how is putting the dirty line X into the write-back buffer and flushing to memory at a later time better than just immediately evicting and flushing X? My understanding is that if X is needed again even before it is flushed, that processor can just load it back without needing those extra data transactions, since it is already in the M state.

@idl another reason to buffer was covered here. Basically, by buffering writes, the cache doesn't have to wait while memory is handling that write. This can lead to higher throughput.
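Here's the eviction path from that example as a toy one-line cache in Python (my own sketch, not the course's code): on a conflict miss, the dirty line parks in the write-back buffer so the new line can be filled right away, and the flush to memory happens later, off the critical path.

```python
# Toy one-line write-back cache with a write-back buffer
# (illustration of the eviction flow only).
class Cache:
    def __init__(self, memory):
        self.memory = memory
        self.line = None            # (tag, data, dirty) -- one-line cache
        self.write_back_buffer = []

    def load(self, tag):
        if self.line and self.line[0] == tag:
            return self.line[1]     # hit
        # Miss: evict the old line first.
        if self.line and self.line[2]:
            # Dirty eviction: park it in the buffer instead of
            # stalling on a memory write before the fill.
            self.write_back_buffer.append(self.line[:2])
        self.line = (tag, self.memory[tag], False)
        return self.line[1]

    def store(self, tag, data):
        self.load(tag)              # bring the line in if needed
        self.line = (tag, data, True)

    def flush_buffer(self):
        # Later, when the bus is quiet, drain buffered write-backs.
        for tag, data in self.write_back_buffer:
            self.memory[tag] = data
        self.write_back_buffer.clear()

memory = {'X': 0, 'Y': 9}
c = Cache(memory)
c.store('X', 5)                         # X now dirty in the cache
c.load('Y')                             # conflict: dirty X parks in buffer
assert memory['X'] == 0                 # write-back hasn't happened yet
assert c.write_back_buffer == [('X', 5)]
c.flush_buffer()                        # off the critical path
assert memory['X'] == 5
```

A real snoop handler would also check `write_back_buffer` on a BusRd/BusRdX for X, as described above; the point here is just that the fill of Y never waits on the write-back of X.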
But as others have mentioned above, this optimization leads to complications in regard to maintaining coherence.

I think that whether this situation is a deadlock depends on how long P1 needs to wait before it gets cache line A. What if P1 gets what it requests immediately? Then the wait is not long-lasting enough to call it a deadlock. Am I right?

If P1 gets the request before the BusRd for B appears on the bus, then there is no deadlock. Like other cases, deadlock would happen when two different controllers request each other's resources, but neither can request and respond at the same time. In this case, if P1 asks for A, and is then sent A before anybody requests B, then there is no deadlock. Since, as you said, the amount of waiting determines whether or not there is deadlock, there would be a race condition, which in general makes for a not-too-reliable system.

Having P1 able to service incoming transactions while waiting to issue its own request breaks the circular-wait condition among the four deadlock conditions. This ensures that deadlock doesn't happen.

Does this assume the processors are sharing line B? If so, then the two processors could be trying to access different memory addresses that both map to line B, right? If not, then the processors must necessarily be accessing the same memory address, is that correct? Otherwise, P2 wouldn't care whether P1 invalidated the line or not, because it's different data.

@tcz The smallest level of granularity in the system is a single cache line. A processor's cache controller announces its intention to write to a cache line; it doesn't specify which address it is writing to. If there are any other processors holding that cache line, they must all invalidate their cache lines; it doesn't matter if those processors are using a set of addresses disjoint from those that the writing processor is modifying. See false sharing for more detail on this.
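That line-granularity point is easy to demonstrate with a toy snoop model (my own sketch; the 64-byte line size is an assumption, typical of modern machines): a write to one address invalidates a peer's copy of a *different* address on the same line.

```python
# Toy model of per-line invalidation, showing false sharing.
LINE_SIZE = 64  # bytes per cache line (assumed)

def line_tag(addr):
    return addr // LINE_SIZE

class SnoopyCache:
    def __init__(self):
        self.valid_lines = set()
        self.invalidations = 0

    def load(self, addr):
        self.valid_lines.add(line_tag(addr))

    def snoop_write(self, addr):
        # Invalidation is per-line: any write to the same line evicts
        # our copy, even if the byte addresses never overlap.
        tag = line_tag(addr)
        if tag in self.valid_lines:
            self.valid_lines.discard(tag)
            self.invalidations += 1

p2 = SnoopyCache()
p2.load(0x44)            # P2 caches the line containing address 0x44
p2.snoop_write(0x40)     # P1 writes a *different* address, same 64B line
assert p2.invalidations == 1   # false sharing: invalidated anyway
```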
@arjunh, I agree the unit of snooping is the cache line, but when the cache controller puts a request on the bus, the address is also recorded on the bus (slide 32 has an Addr comparison); otherwise the processor cannot know whether it has the same cache line.

When we use a policy where multiple processors are competing for bus access and this leads to starvation, will the state of starvation be permanent or not?

@elemental03 I don't think there will be starvation if using FIFO arbitration. Starvation can't really be permanent unless the flow of work is continuous and never decreasing - progress is being made somewhere and eventually the processor will get bus access (also, if arbitration is random, then it's very likely access will be obtained eventually; it's just the time frame that is an issue).

Question: Is it true that a system is atomic $\iff$ it is race free?

@idl, yes. Atomic transactions are the highest standard and therefore the most costly for multi-threading/multi-process cases. Therefore it is safe to say an atomic system is race-free.
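To illustrate why FIFO arbitration avoids starvation, here's a tiny sketch (my own, not from the lecture): requesters are granted strictly in arrival order, so as long as each grant eventually completes, no processor waits forever - even if another processor keeps re-requesting the bus.

```python
# Toy FIFO bus arbiter (illustration only).
from collections import deque

class FifoArbiter:
    def __init__(self):
        self.queue = deque()

    def request(self, proc_id):
        self.queue.append(proc_id)   # join the back of the line

    def grant(self):
        # Grants strictly in arrival order: no requester can be
        # indefinitely bypassed by later arrivals.
        return self.queue.popleft() if self.queue else None

arb = FifoArbiter()
for p in ['P0', 'P1', 'P2']:
    arb.request(p)
arb.request('P0')   # P0 asks again before P2 has been served
grants = [arb.grant() for _ in range(4)]
assert grants == ['P0', 'P1', 'P2', 'P0']  # P2 is not starved
```

With random arbitration, by contrast, P2's wait is only probabilistically bounded, which matches the "it's just the time frame that is an issue" point above.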
http://15418.courses.cs.cmu.edu/spring2014/lecture/snoopimpl1
I signed up for another bunch of cooking classes at the community college, including three more vegetarian cooking classes from my favorite instructor-chef, Alyssa Moreau. This recipe for Thai-Flavored Ratatouille is from her Vegetarian Local Style class and was my favorite out of the four dishes we made. It takes some of the basic ratatouille ingredients like eggplant, zucchini, onion, and bell pepper and puts them in a delectable Thai-inspired sauce. Heat olive oil in a large skillet on medium heat, add the onion and saute a few minutes until it begins to soften. Add in the rest of the vegetables (through the garlic), and cook until crisp-tender. Add in the sauce, cover and simmer about 5 minutes, then add salt and pepper to taste and garnish with cilantro. Serve over rice. Alyssa notes that she likes to steam her eggplant first to partially cook it. Also, tofu or cooked sweet potato or yam are nice additions. Blend all together and adjust flavors to taste. Notes/Results: Delicious! The sauce is incredible; thick, creamy and slightly nutty with the almond butter. It has just a hint of spice but you could add more chili flakes or chili paste for more of a kick. I served it on top of a brown rice pilaf mix with sliced almonds on top and wished I had more; it was so good. If you aren't feeling the veggie vibe, adding some chicken or shrimp would be good too. This simple, flavorful dish is a keeper for sure. One of my favorite magazines is Natural Health, which is always full of great articles and tips as well as fun healthy recipes. The July issue was no exception and I have several dishes tagged to make. The one that intrigued me the most was this one for Watermelon Soup with Feta from the "Easy No-Cook Meals" feature. With fresh watermelon and pineapple blended and topped with feta cheese, mint and cilantro, it sounded refreshingly different and good.
This recipe might not be everyone's cup of tea (or rather cup of soup), but if you can suspend any beliefs you have that watermelon belongs with sweet ingredients (instead think of either watermelon gazpacho or watermelon salad with basil and feta), and that soup has to be hot to be good, you will probably like it. Natural Health says: "Cold soups made with juicy summer fruits like watermelon and pineapple require only a sharp knife, a blender, and a few tangy garnishes like feta cheese and some mint sprigs. Watermelon is loaded with lycopene, a powerful antioxidant that can help protect your skin from sun damage and flush LDL ("bad") cholesterol out of your body." (BTW, both watermelon and pineapple are high in Vitamin C too.) "Buy and store uncut watermelon at room temperature for maximum flavor and lycopene content. This recipe also includes chunks of pineapple--if you can't find it, use strawberries instead." "The combination of watermelon and feta delivers a sweet-sharp one-two punch that's refreshing--and addictive." In a medium-sized bowl, combine 1 cup of the watermelon with the mint and cilantro and set aside. In a blender, pulse the remaining watermelon, pineapple, honey, and sparkling water until smooth. (If your blender is small, work in batches). Using a mesh strainer, strain the soup into a large bowl or soup tureen, then ladle into 8 bowls. Top each bowl with 1/8 cup of the reserved watermelon mixture and a teaspoon of the feta. Garnish with mint. Per serving: 89 calories, 1g fat (0.5g saturated), 21g carbohydrates, 1.8g protein, 1.4g fiber, 38mg sodium (2% Daily Value). Notes/Results: Good! Cold and refreshing. You taste the sweet, fruity flavor of the watermelon and pineapple and then get the salty taste of the feta and the fresh herbal taste of the mint and cilantro. When reading the recipe, I kept thinking about pairing basil and mint together with the watermelon (as much as I love cilantro as the recipe asks for), so I tried it that way too.
Both were good but I think I prefer the basil in the topping a bit more. I used some small sweet local watermelon and fresh pineapple. In fact, with the exception of the feta and sparkling water, the whole dish was made with fresh local ingredients. The watermelon rind cut in half and the flesh scooped out makes a fun soup bowl, or a larger one would make a great soup tureen, but the soup looks equally pretty in a small glass bowl, dessert dish or goblet. Pull this recipe out in the dead of summer when you can't bear to spend time cooking or turning on the oven. First up is Natashya at Living in the Kitchen with Puppies who treated her hard-working and dehydrated hubby to some Gazpacho after a long, hot afternoon of yard work. Using Ina Garten's recipe as a jumping off point, she spiced it up just the way she likes it. Natashya says "it came out delicious" and was "very refreshing indeed." Ulrike from Küchenlatein is here with a delectable bright green Spinach Soup with Parmesan Crisps and Sour Cream. Ulrike adapted it from a vegetarian cookbook by Gabriele Kurz, a native-German chef living in Dubai. In addition to the baby spinach, this soup combines potatoes, garlic, cream, salt and pepper with a yummy garnish of some thick sour cream and crisp Parmesan cheese crisps. Having green purslane growing in a pot along with a jalapeno chili plant gave Graziana at Erbe in Cucina the idea to combine these two ingredients into one delicious Hot Green Purslane Salad. Graziana made this one simple using just purslane leaves and green chilies (she says you can use any green chili pepper you like), and dressing the salad with olive oil, vinegar and salt. Fresh local cooking at its best! Welcome to Jamie from Life's a Feast, joining Souper Sunday for the first time, all the way from Nantes, France. She's here with a Zucchini Carpaccio served with a Mozzarella Tomato Salad, which she says "is the ideal summer meal, light and cool, healthy and a snap to put together."
Jamie says you can serve this delicious salad, "as is, with a bit of fresh bread if you like and a bottle of chilled white or rosé." Thanks for sharing it with us Jamie! Kristen from Whatcha Eatin'? is back this week with her favorite easy sandwich to share, the Chicken Salad Wrap. Made even simpler with store-bought rotisserie chicken and kept healthier with light mayo, the chicken salad is nestled in a wrap of your choice. Kristen says she "will make a bunch on a Sunday and have wraps ready for the week for lunches." Quick and delicious--what could be better for summer?! A smaller turnout this week for Souper Sunday, but some very creative recipes that are perfect for summer. Thanks to everyone who participated. If you have a soup, salad or sandwich you want to share at Souper (Soup, Salad & Sammie) Sundays, click the logo on the side bar for all the details. When the weather is warm and muggy, nothing is better than pulling open the refrigerator and being able to pull out a big pitcher of a thirst-quenching drink. When I found this Strawberry-Basil Iced Tea in the latest Martha Stewart Living, it seemed the perfect choice for a "Simple Saturday Sipper". Using fresh, sweet Waimea Strawberries from the Big Island and basil from my CSA box made this especially fresh, and using a berry-flavored black tea bag allowed me to cut down on the sugar. Black tea is full of antioxidants, basil has lots of Vitamin K and strawberries are filled with fiber, Vitamin C and other good-for-you stuff, so this sipper is not only delicious--it's good for you too! Bring 4 cups water to a boil in a medium saucepan. Add tea bags, and let steep for 5 minutes. Place strawberries in a bowl. Bring water and sugar to a boil in a small saucepan, stirring until sugar dissolves. Remove from the heat, add basil, and let steep for 10 minutes. Strain over strawberries; discard basil. Toss to coat. Let stand until cool, about 25 minutes. Combine strawberries (with syrup) and tea in a pitcher.
Refrigerate until chilled. Serve over ice, and garnish with basil. Notes/Results: Delicious, cool and refreshing, this is a wonderful drink for summer. The basil comes through with a nice subtle herbal taste, but since I like even more intense basil flavor, I think I will up the amount of basil leaves and steep it a bit longer next time. Using a natural berry-flavored black tea enabled me to cut the sugar down to about 1/3 cup, so I reduced the water for the syrup to 3/4 cup. I served this in tall glasses and garnished it with plenty of fresh basil. Be sure to include a tall iced-tea spoon to scoop up all those yummy strawberries. This is a keeper recipe for sure. What is better than enjoying delicious food with good friends? Having friends that are incredible cooks. My friend Natalie had a small group of us up for Spanish tapas this week. I have known Natalie for about 13 years and she is one of the best cooks I know, as well as a recent addition to food blogging at Ask Natalie Hawaii. The food was divine (Natalie outdid herself with many different pupus to try), the wine brought by everyone was delicious and the company was wonderful. It was a chance to escape from the world's craziness and we all left full and happy. Here are a few shots of just some of the food we enjoyed. (Please note that my photography skills decline in relationship to the amount of wine and food I consume). This is black garlic (read about it on Natalie's post here). This was my first time trying it and I liked the earthy, slightly licorice taste of it. Nat let me contribute a couple of cold tapas so I brought the Gazpacho that I made for Barefoot Bloggers (here) and it was perfect as "gazpacho shooters" to start our meal. This is Goat Cheese with Mojo Verde that I made from The Barcelona Cookbook by Sasa Mahr-Batuz & Andy Pforzheimer. This is the current cookbook I am reviewing and I just started trying recipes from it (hopefully the review will post sometime next week).
This yummy dish passed the "foodie friend test" with flying colors! You can find a copy of this recipe here. This doesn't begin to cover the spread of food we enjoyed: rice, lots of veggies, Spanish ham, cheese, grilled bread, etc. It was a lot of food! Mahalo again to Nat for hosting such a great night! Since I made two Tyler recipes last week and combined them into one dinner, I did myself one better and made three of his recipes this week. I started off by noticing the recipe for Grilled Chicken Breast with Ginger and Soy on the Food Network site. While trying to decide what to serve with it, I noticed that it was part of a menu for a How to Boil Water episode along with Cold Sesame Noodles and Sweet Chili Cucumbers. Since the noodles have gotten great feedback from the Tyler Florence Fridays members who have tried them and I love me some marinated cucumbers, I decided to make the whole dinner as my TFF pick for this week. I made a few changes of course, including reducing the salt and oil in just about everything (Bad Tyler!), and using buckwheat soba noodles and almond butter in the noodle dish. A great make-ahead meal for a hot summer night as you can marinate the chicken, make the cucumbers, boil the noodles, put it all in the fridge and then just pull it out when you are ready to make dinner. You simply grill the chicken, blend the sauce for the noodles, add the finishing touches and it's ready to serve. I am just posting the entree recipe for the Grilled Chicken Breast with Ginger and Soy here. You can find the link to it and also the links to the recipes for the Cold Sesame Noodles and Sweet Chili Cucumbers here. Notes/Results: An easy and delicious dinner and perfect for the warm, muggy evening we had. For dinner I served the whole chicken breast warm over the noodles with the cucumbers on the top and the sides.
For lunch the next day I had the cold chicken sliced over the noodles and cucumbers and enjoyed it even more as both the noodles and the cucumbers have more flavor on day two. Chicken: The chicken is very flavorful and tender. I think, having reduced the amount of salt I use overall, I may be a bit "salt sensitive." Even though I used a reduced-sodium soy sauce, reduced the amount by about half and didn't salt the chicken after removing it from the marinade (yikes Tyler!), for my first few bites I thought it might still be too salty for me, but I ended up liking it as I ate more. I reduced the amount of olive oil too, using about half the amount the recipe called for, and increased the lime juice a bit, using a whole lime for the half recipe of chicken. Noodles: Yum! I can see why these have been a popular choice at TFF; they are creamy and good. I did reduce the oil and soy sauce slightly and upped the chili paste and rice vinegar. I used buckwheat soba noodles for the base and freshly ground almond butter. I find I really like the flavor of almond butter in dishes like these--still nutty but more subtle than peanut butter, plus I almost always have it around my house. I will be making these again. Cucumbers: Crispy, cold, slightly sweet and with a kick from the chilies, these are a perfect refreshing little side dish with lots of flavor. All together a great dinner and wonderful lunch the next day! You can see what recipes the other Tyler Florence Friday participants made this week and find out how they turned out by going to the TFF site here. Our final Barefoot Bloggers recipe for June is Gazpacho, selected by Meryl of My Bit of Earth. I am a Gazpacho fan and have made several different recipes for this cold Spanish soup, but had never tried Ina's recipe, so I was happy to get the opportunity to make it. Another bonus is that there was no butter to cut out, a semi-rare thing in an Ina recipe.
Although this is generally a pretty healthy recipe, I did find two things to cut down on: olive oil and salt, and I reduced both (the amounts I used are in red below and are for a half batch of the soup--which, still in Ina fashion, makes a pretty large amount). In The Barefoot Contessa Cookbook, Ina recommends a particular brand of tomato juice called Sacramento Tomato Juice and since I didn't see it at either of the two stores I went to, I went with the on-sale V-8 Vegetable Juice (the low-sodium version, because you can always add a bit of salt to taste and you don't get the massive amounts of sodium that are in the regular version), which worked perfectly. Finally, there is just something about gazpacho that calls out for shrimp, so I served mine in a margarita glass topped with fresh parsley and basil and hooked some of the plump pink beauties on the rim--perfect for a summer party. In addition to The Barefoot Contessa Cookbook, this recipe is also at the Food Network site here. Notes/Results: As usual Ina's recipe is great; simple and very flavorful and so good on a hot day or night. Cutting the oil and salt still resulted in a delicious soup--in fact I think even using the low-sodium vegetable juice, if I had used Ina's amount of salt it would have been too much. The sweetness of the shrimp is perfect with the savory soup and helps round it out, making it feel more like a meal. The keys to a good gazpacho are great, fresh vegetables, (says the girl who used the on-sale V-8!) and letting it set in the fridge overnight to get nice and cold. When I have made or been served less than stellar gazpacho, it usually relates back to these two things. The fresher and sweeter your veggies, the better your soup, and letting it sit for 12-24 hours allows the flavors to mature and meld together. I also like to get my onion pretty fine, while keeping the other veggies chunkier.
There is no worse gazpacho buzzkill in my book than biting into a big old hunk of onion, so I give them a few extra pulses, which also makes for a thicker broth. Although I made just a half batch of this recipe, I had enough for a couple of glasses and the rest is going to accompany me to a small tapas party tonight where it will be served in small juice glasses with a shrimp hung over the side of each one as "Gazpacho Shots." This was a really great pick--thanks Meryl! You can find out more details on the Barefoot Bloggers as well as see what the other BBs thought of this recipe by going to the site here. As much as I love all the exotic tropical fruit available here (my dinner last night consisted of a large bowl of sweet, cold mango and toast), I sometimes crave other fruits that are not local or as readily available, like blackberries, peaches, plums and apricots. We do get them here, but usually they look pretty worn by the time they have traveled all those food miles to get to the grocery stores. I always consider myself lucky when I find these fruits looking good and fresher than the norm, and I was happy to find some large and healthy-looking apricots at Whole Foods. I ate several and then decided to make something with the rest. Craving homemade ice cream, I found a great-sounding recipe for Apricot and Cardamom Yogurt Ice Cream in Ice Cream! by Pippa Cuthbert and Lindsay Cameron Wilson. I have several ice cream books and Ice Cream! is a fun little book with lots of easy recipes with interesting ingredients and different, exotic flavor combinations. Ice Cream! says: "Apricots go well with cardamom. The little black seeds inside the green cardamom pods give a strong but subtle flavor. With the addition of yogurt, this ice cream has an almost Indian slant." Halve the apricots, remove the pits and chop. Put the chopped apricots, sugar, water, cardamom pods and orange juice in a large saucepan and bring to a boil.
Cover and simmer until the fruit is tender, about 8-10 minutes. Remove the cardamom pods, transfer the mixture to a food processor and process until smooth. Allow to cool completely. Stir the yogurt into the cooked mixture and churn in an ice cream maker, according to the manufacturer's instructions, until frozen. Transfer to an ice cream container or ice block molds and freeze. Put in the refrigerator 20 minutes before serving. Note: If you are not a fan of cardamom and omit it, you're left with a delicious creamy apricot ice cream all on its own. Notes/Results: Good--both tangy and sweet, the cardamom and apricot really pair well together. I cut out about 1/3 of the sweetener, so mine may have been more tangy than the original recipe. I also really love cardamom so I put in double the amount (six pods), and the flavor came through nicely. I used non-fat Greek yogurt to cut out some of the fat and calories and make it a healthier option. This ice cream, topped with some crushed pistachios, would be the perfect end to a great Indian meal. I would make this one again. I am sending this along to the month-long Ice Cream Social being hosted by three wonderful bloggers: Tangled Noodle, Scotty Snacks, and Savor the Thyme. They will be doing a major round-up of everyone's ice cream and frozen concoctions after the end of July. Half of me is Scandinavian (Danish & Swedish), but I do not know much about the food or the culture around food in these countries. Besides the occasional meal of Swedish pancakes or Swedish meatballs and the few times the Danish Ebelskiver pan was brought out and we enjoyed the little stuffed pancakes, we didn't eat many Scandinavian dishes. Because of this I was very excited to receive a copy of The Scandinavian Cookbook by Trina Hahnemann to review. This is a gorgeous book that not only explores the food from Denmark, Sweden and Norway, but also celebrates the culture of these countries.
Trina Hahnemann is a chef, food writer and published cookbook author who lives in Denmark. She started out catering for rock stars like Elton John, Bruce Springsteen and The Rolling Stones and today owns and runs cafes in Denmark. The book has 115 recipes divided by months and grouped into seasons to make the most of the local foods available in the Nordic region. The photography is gorgeous (done by Lars Ranek, one of Scandinavia's premier food photographers), and features beautiful shots of the recipes, the ingredients and the countries themselves, making this the kind of cookbook you want to read and enjoy. Each recipe or grouping of recipes has notes about the history and customs of the dish, so I found myself learning a lot going through the book and selecting recipes to try. Hahnemann set out to show that modern Scandinavian cooking has "evolved" from the more traditional recipes, and many of the dishes take inspiration from other countries and cultures while making the most of local ingredients. Having a busy few weeks, it took me a while to work myself through this cookbook, selecting recipes that were appropriate for the season and the ingredients I have available here in Hawaii. Asian ingredients are no problem here and I can do fairly well in Mexican and Indian products, but you start getting into the European countries and sourcing recipe components gets a bit more challenging. Nonetheless, I managed to cook a variety of dishes, almost all very successfully, and I found a few new favorites. The first recipe I chose to make was Meatballs in Curry Sauce, where small meatballs of pork and veal are boiled, then simmered in a curry cream sauce with leeks, carrots and apples. Being a huge curry nut, I liked the change from my usual curry recipes to this one. It was hearty, nicely spiced and delicious. Loving smoked and cured salmon, I thought it would be fun to try the Marinated Salmon recipe in the book.
Cured in the refrigerator with sugar, salt and citrus zest, then frozen, defrosted and sliced thinly, it has a slightly sweet and citrus taste. In addition to snacking on it and eating it on bagels and toast, it found its way into an open-faced "Smorrebrod" sandwich that you will see below. The only recipe I really struggled with was the Rye Bread. As I frequently lament, "I AM NOT A BAKER!" (or apparently a bread maker either), therefore I am more than willing to shoulder the responsibility of my bread turning out to be a hard, slightly too salty, somewhat funky-tasting lump. I did follow the recipe, but more detail and specific instructions would have helped a bread-making neophyte like myself. For example, "Cover with foil and let stand for 3-4 days at room temperature (77 to 86 degrees F.). And there you have a sourdough starter," didn't give me enough direction to really know if my starter was ready. A description of what my starter should look like when ready to use and more technical details would have helped me, but of course, a more experienced baker might have been just fine. I did save out some of the starter and may try it again, although it was a bit high-maintenance for me. I had better baking luck with my old nemesis...yeast, in the wonderful Brunsviger, a soft, bread-like cake from Denmark with a brown sugar-butter glaze. It was good that I halved this recipe and made just a small Brunsviger, as this tender cake and its sweet topping are addicting. I enjoyed it with some mango-ginger black tea; not necessarily a traditional pairing but oh-so good! For the Smoked Salmon and Horseradish Cream with Crunchy Cucumber and Caraway Seed Salad, I used some really good smoked wild Alaskan salmon. The revelation on this one was the dressing, which with the light sour cream and kick of horseradish was delicious. The combo of flavors in this salad was right on and I loved the caraway seeds. A simple, light lunch or dinner, I will make this one often I think.
The leftover dressing ended up on my new passion, the "Smorrebrod," too. A couple of other salads caught my eye: the Cauliflower with Coarse Almonds, where raw cauliflower is cut into small florets and tossed with a dressing that includes whole almonds that are coarsely ground, garlic, lemon and fresh chervil or parsley. Yum! The Carrot Salad with Parsley and Pine Nuts was also quite good and simple with its shaved carrots and toasted pine nuts dressed in lemon juice and olive oil. Both are perfect with a sandwich and great for a hot day. Finally, my new favorite thing....the Smorrebrod, which the author defines as "open-faced sandwiches made with rye bread, and preferably served with aquavit and beer. In the old days people ate very simple ones, such as rye bread with a slice of cold meat, and took them to work as a packed lunch. In the early twentieth century, decorated smorrebrod became fashionable as a late dinner, after theater, or in dance clubs where the guests did not want to spend hours sitting down to a meal and instead wanted to spend their time dancing. Smorrebrod are delicious and luxurious but do not take a lot of time to eat." I started with one of the recipes: Smorrebrod: Open-Faced Sandwiches with Flounder, Shrimp, and Basil Dressing. Since I was not able to find flounder here, my Whole Foods fish guy led me to the closest thing he had, Dover Sole. The fillets are breaded in rye flour, cooked and placed on a slice of rye bread covered in lettuce, then topped with creamy basil-lime dressing and cooked shrimp. Delicious. I liked it so much that I used the leftover basil dressing to make my own Caprese Smorrebrod, using the tomatoes I picked up on the North Shore and some fresh mozzarella and basil. Again...Yum! And of course some of my other leftovers, the Marinated Salmon and Horseradish Cream, made an excellent Smorrebrod with some capers and green onions. Quick to make, easy to eat and a perfect light lunch or dinner, I foresee a lot of Smorrebrods in my future!
I have a bunch more recipes tagged to make in this book; everything from Captain's Stew to Lemon Mousse, Fish Cakes with Herb Remoulade and Dill Potatoes, Oxtail Ragout, Skagen Fish Soup and maybe even some "Glogg" for Christmas this year. A beautiful book that is a delight to read and full of great recipes, The Scandinavian Cookbook would be perfect for the experienced Scandinavian cook or for anyone who wants to learn more about the food and culture of these countries. With the beautiful photography and delicious recipes within, I am happy to have a copy on my bookshelves. I have been a lifelong lover and devourer of books, starting when I was very young with my Dad reading to me; they were a huge part of my childhood. That's why every now and then I still love to read a children's book, whether a classic like The Borrowers or Little Women, or something newer like the Harry Potter series. I was happy to see that Rachel, one of my fellow co-hosts (along with Jo) of Cook The Books (our virtual foodie book club), had selected a classic English children's book, The Little White Horse, written by Elizabeth Goudge and published in 1946, for the group to read. I was unfamiliar with the book, but happy to learn that it was a favorite of Harry Potter creator J.K. Rowling and that it also won a Carnegie Medal. (Not to mention a recommendation from the wonderful Foodycat too). It is a beautifully written fantasy about the young, orphaned Maria Merryweather, who travels with her governess Miss Heliotrope and Wiggins, her King Charles spaniel, to the mysterious and lovely Moonacre Manor to live with her uncle and new guardian, Sir Benjamin. The story follows Maria, the last Moon Princess, as she tries to solve the mysteries of Moonacre Manor, right the wrongs of her ancestors and bring happiness to the manor, the valley, her friends and family and herself.
The book is full of vivid descriptions and imagery that enable the reader to envision the beautiful setting, the delicious food and the inhabitants of the manor, village and the valley surrounding it. There is a cast of colorful and imaginative characters, both human and animal, in the novel, and I think my favorite has to be Marmaduke Scarlet, the skilled and diminutive cook at the manor. Goudge describes him as a "little hunchbacked dwarf" with a "smile so broad that the ends of it seemed to run into his ears." Maria determines he must "be very old" because "the fringe of whisker that encircled his whole face like a ham frill was snow white, and so were his bushy eyebrows. Except for the whisker frill, his face was clean shaven, brown as an oak-apple, and criss-crossed with hundreds of little wrinkles." An excellent cook, Marmaduke Scarlet considers cooking very serious business; the kitchen is his private domain and it can only be entered by invitation. The endless array of delicious British food that Marmaduke Scarlet creates is sprinkled through the book, and it was difficult to decide what to make. I finally decided on something I had heard of before but never tried, Syllabub, which Marmaduke makes for dessert when meeting Maria for the first time. It is defined in The New Food Lover's Companion: "This thick, frothy drink or dessert originated in old England. It's traditionally made by beating milk with wine or ale, sugar, spices and sometimes beaten egg whites. It's thought that the name of this concoction originated during Elizabethan times and is a combination of the words "Sille" (a French wine that was used in the mixture), and "bub" (old-English slang for "bubbling drink")." In The Little White Horse, the syllabub is described simply: "Twelve eggs went to the making of the syllabub, a pint of cream, and cinnamon for the flavoring." The recipe can be found in Nigella Bites (page 207) or it is also at the Food Network site here.
About this recipe Nigella says: "This hasn't got the temple-aching sweetness of Turkish Delight, not its palate-cleaving glutinousness, but rather it is a cloud-like spoon-pudding version that attempts to catch the aromatic essence." Combine the orange-flavored liqueur, lemon juice and sugar in a large bowl (I use the bowl of my freestanding mixer) and stir to dissolve the sugar, or as good as. Slowly stir in the cream, then get whisking. As I said, I use my freestanding mixer to do this, but if you haven't got one, don't worry - but I would then advise a handheld electric mixer. This takes ages to thicken and doing it by hand will drive you demented with tedium and impatience. Or it would me. When the cream's fairly thick, but still not thick enough to hold its shape, dribble with the flower waters and then keep whisking until you have a cream mixture that's light and airy but able to form soft peaks. I always think of syllabub as occupying some notional territory between solid and liquid; you're aiming, as you whisk, for what Jane Grigson called "bulky whiteness." Whatever: better slightly too runny than slightly too thick, so proceed carefully, but don't get anxious about it. Spoon the syllabub in airy dollops into small glasses, letting the mixture billow up above the rim of the glass, and scatter finely chopped pistachios on top.

Notes/Results: Oh My! Syllabub is decadent and good without being too heavy; it is very light and fluffy. At one point as I was making it, I thought "OK, I am basically just making softer, runnier whipped cream here, what is exciting about that?" But after tasting this cloud-like concoction, I realized that it is on its own level entirely. The combination of the slight tartness of the lemon juice, the sweet and slightly bitter taste of the Cointreau and the floral essence of the rosewater and orange-flower water blend together so well.
Then you have the fluffy creaminess of the syllabub offset by the crunch of the ground pistachios on top. It is simple to make, other than requiring about 10 minutes or so of whipping to get it to the right texture without getting it too firm. I used my electric mixer, and I have to say that it was lucky that Marmaduke Scarlet's arms "were much too big for the rest of him", since he had to whip his syllabub by hand! Although Nigella's recipe is more exotic in flavor than Marmaduke's syllabub, I figured that since he made saffron cake and other delicacies, he wouldn't mind the Turkish influence. In fact I garnished my syllabub with a little saffron too. I do think he would have had a big problem with Nigella herself and her habit of sneaking into the kitchen in the middle of the night for a snack and sticking her fingers into things! I really liked The Little White Horse; it was an enjoyable and easy read. Thanks to Rachel for selecting it! If you would like to join us at Cook The Books for this round, you have until June 26th to read the book and get your entry representing this book posted. For more details on Cook The Books and to see our upcoming selections (it is my turn to host the next round and we will be journeying to China for The Last Chinese Chef by Nicole Mones!), visit the CTB site here.
The projected climate change signals of a five-member high resolution ensemble, based on two global climate models (GCMs: ECHAM5 and CCCma3) and two regional climate models (RCMs: CLM and WRF), are analysed in this paper (Part II of a two part paper). In Part I the performance of the models for the control period is presented. The RCMs use a two nest procedure over Europe and Germany with a final spatial resolution of 7 km to downscale the GCM simulations for the present (1971–2000) and future A1B scenario (2021–2050) time periods. The ensemble was extended by earlier simulations with the RCM REMO (driven by ECHAM5, two realisations) at a slightly coarser resolution. The climate change signals are evaluated and tested for significance for mean values and the seasonal cycles of temperature and precipitation, as well as for the intensity distribution of precipitation and the numbers of dry days and dry periods. All GCMs project a significant warming over Europe on seasonal and annual scales, and this warming is retained in both nests of the RCMs, albeit with small added variations. The mean warming over Germany of all ensemble members for the fine nest is in the range of 0.8 to 1.3 K, with an average of 1.1 K. For mean annual precipitation the climate change signal varies within the ensemble in the range of −2 to 9 % over Germany. Changes in the number of wet days are projected to be within ±4 % on the annual scale for the future time period. For the probability distribution of precipitation intensity, a decrease of lower intensities and an increase of moderate and higher intensities is projected by most ensemble members. For the mean values, the results indicate that the projected temperature change signal is caused mainly by the GCM and its initial condition (realisation), with little impact from the RCM. For precipitation, the RCM additionally affects the climate change signal significantly.
In the fourth assessment report (AR4) of the Intergovernmental Panel on Climate Change (IPCC), a global warming of about 0.2 K per decade for the twenty-first century is projected within the range of the SRES scenarios, with even larger increases for sub-regions such as Europe (Christensen et al. 2007). For annual mean precipitation, an increase in most of Northern Europe and a decrease in most of the Mediterranean area are projected. For Central Europe, the AR4 GCM ensemble models show approximately equal numbers of projected increases and decreases in annual mean precipitation, and weak signals. On seasonal scales, precipitation is likely to increase in winter in Northern and Central Europe, and to decrease in summer in Southern and Central Europe, but the models disagree on the magnitude and geographical details of the climate change signals. Thus, Central Europe is a region with large uncertainty regarding the mean state of the future climate. In addition, changes in the probability distribution of precipitation are projected (e.g. Frei et al. 2003; Boberg et al. 2009, 2010). For many climate impact studies, the results of regional climate simulations are an essential input. In particular, impact studies investigating climate-driven changes in natural hazards require high resolution meteorological forcing data. One example is hydrological simulation for the assessment of flood hazards in a changing climate, in particular for small and medium sized catchments. The increase in horizontal resolution enables a more detailed model simulation, which usually provides better results in the presence of complex fine scale topographical features and in simulating extreme events (Giorgi 2006). Furthermore, a better representation of the spatial patterns and intensity distributions of precipitation is achieved (Boberg et al. 2010). The spatial resolution of RCM simulations has steadily increased over the last decades.
For Europe, larger ensemble assessments of climate change were carried out in the PRUDENCE project (Christensen and Christensen 2007), with a main resolution of around 50 km, and the ENSEMBLES project (Hewitt 2005), with a spatial resolution of 25 km. Simulations at even higher resolutions have been carried out, e.g. with the RCM CLM at around 18 km in the so-called consortium simulations (Feldmann et al. 2008), with the HIRHAM model at 12 km within the PRUDENCE project (Christensen and Christensen 2007), and with the REMO model at 10 km within the framework of UBA (Umweltbundesamt) and BFG (Bundesanstalt für Gewässerkunde) projects (Jacob et al. 2007b) covering the region of Germany. The PRUDENCE projections of changes in precipitation show a north-south gradient, with positive changes in the north, negative changes in the south, and a transition zone which moves with the season and varies between the models (Christensen and Christensen 2007). For temperature, an increase is projected for all seasons all over Europe, with the largest warming in summer in the Mediterranean region. The analysis showed greater geographic detail in the modelled fields and a tendency towards less warming compared to the coarser GCM simulations. Furthermore, they noted that regional models with quite different biases (Jacob et al. 2007a) are much closer to one another in simulating climate change. Déqué et al. (2007) assessed the uncertainties of the PRUDENCE regional climate simulations. They found that the role of the GCM is generally greater than that of the RCM, but for summer precipitation the uncertainty introduced by the choice of the RCM is of the same magnitude as that of the choice of the GCM. Furthermore, Boberg et al. (2009, 2010) found a clear relative increase of more intense precipitation days, and a decrease of light and moderate ones, in their contribution to total precipitation for the scenario periods.
We constructed a multi model ensemble of high resolution 7 km regional climate simulations for a present (1971–2000) and a near future (2021–2050) time period covering Germany and its near surroundings. The simulation periods are 1968–2000 and 2018–2050, which allows three years of spinup for each simulation. The ensemble is based on two GCMs (ECHAM5 and CCCma3) and two RCMs (CLM and WRF), and the simulations were performed within the CEDIM project "Flood hazard in a changing climate" to assess the climate change impact on medium and small sized river catchments in Germany (Schädler et al. 2012). By including multiple GCMs and RCMs and also three realisations (referred to as R1–3 in text and figures) of ECHAM5, the ensemble samples some of the uncertainty in future projections due to the models used as well as to natural variability. In total, five high resolution simulations were performed: CLM driven by all four GCM realisations, and WRF driven by ECHAM5 R1. This ensemble is the largest set of RCM simulations for Germany at such high horizontal resolution for two 30 year time periods. Furthermore, the ensemble includes, to our knowledge, the first long-term regional climate simulation of the RCM WRF for Central Europe. In addition, two earlier simulations carried out with the REMO model at 10 km resolution (Jacob et al. 2007b) are included for analysis and comparison. The near future time period 2021–2050 was chosen because the scope of this project is to investigate changes in flood hazard for a period which conforms with the planning horizons of water resource management systems. For this time period the projected climate change signal is minor compared to that for the last 30 years of the twenty-first century, which is usually applied in studies investigating possible climate change. Furthermore, the impact of the choice of emission scenario on the projected climate change signal is small for the near future time period. In Part I (Berg et al.
2012) of this two part paper, the ensemble was validated against observational data. The validation results showed the benefit of high detail in the spatial patterns, and added value for the precipitation intensity distribution, especially for extreme events, as also previously seen in e.g. Boberg et al. (2010) and Frei et al. (2003). In the current paper, Part II, climate projections of temperature and precipitation for a near future time period are investigated, together with the possible added value of higher spatial resolution. The applied GCMs and RCMs are described briefly in Sect. 2. Section 3 presents a detailed analysis of the projected change signals of the GCMs and both nests of the RCMs. The paper closes with a summary and conclusions in Sect. 4. The ensemble is based on two GCMs and two RCMs, which are described in detail in Part I of this study (Berg et al. 2012); only the main features of the models are repeated here. The two GCMs (ECHAM5 and CCCma3) were selected based on their performance on a global scale (Reichler and Kim 2008) and on their availability. For the dynamical downscaling, two state-of-the-art non-hydrostatic RCMs (CLM and WRF) were chosen. Due to the large step in horizontal resolution between the GCMs (200–300 km) and the target resolution of 7 km, a double nesting approach is applied for each of the RCMs. The bulk of the simulations are carried out using the IPCC-AR4 simulations with the ECHAM5/MPIOM model system at T63 resolution (Roeckner et al. 2003). Three realisations of these simulations, i.e. simulations with different initial conditions, are used. In addition, one CLM simulation uses realisation four of CCCma3 (Scinocca et al. 2008) at T47 horizontal resolution. For both GCMs the responses to the IPCC SRES A1B forcing scenario are investigated in comparison to the twentieth century anthropogenic-forcing-only simulations.
The CLM model (version 4.8) uses a Runge-Kutta time-stepping scheme; the radiation scheme of Ritter and Geleyn (1992) is called every hour, the Tiedtke (1989) scheme is used for the convective mass flux parameterisation, and the four-species cloud scheme of Doms and Schättler (2002) provides prognostic precipitation. The WRF–ARW model version 3.1.1 (Skamarock et al. 2008) uses the WSM5 microphysical parameterisation (Hong et al. 2004; Hong and Lim 2006), the modified version of the Kain–Fritsch scheme (Kain 2004) for cumulus parameterisation, the Noah land surface model (Chen and Dudhia 2001), the YSU PBL parameterisation (Hong et al. 2006), and the Dudhia shortwave (Dudhia 1989) and RRTM longwave (Mlawer et al. 1997) radiation schemes. Both RCMs, CLM and WRF, follow a double nesting procedure with a coarse nest extending over all of Europe at around 50 km resolution, and the fine nest covering Germany and the near surroundings at 7 km resolution (see Fig. 1). Both models use 40 vertical levels for both nests. Results from previous simulations with the REMO model at a slightly coarser resolution are included in this study for comparison (Jacob et al. 2007b). The hydrostatic REMO model (Jacob et al. 2001) is based on the ECHAM4 physical package, using the radiation parameterisation of Morcrette et al. (1986), the Sundquist (1978) large-scale cloud parameterisation and the Tiedtke (1989) and Nordeng (1994) convective parameterisations. For the simulations for Germany, Jacob et al. (2007b) used a double nesting approach with a coarse nest of about 50 km and a fine nest of about 10 km, both with 27 vertical levels, to dynamically downscale realisations one and two of the ECHAM5 GCM simulations. For the analysis of projected climate change between the present (1971–2000) and near future A1B scenario (2021–2050) time periods, the same domains are used as for the validation process.
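For orientation, the WRF physics choices listed above map onto the standard WRF-ARW namelist option numbering roughly as sketched below. This is an illustrative fragment only, not a namelist taken from the project, and a complete namelist.input would additionally need domain, time-control and soil-layer settings:

```
&physics
 mp_physics         = 4,   ! WSM5 microphysics
 cu_physics         = 1,   ! Kain-Fritsch cumulus scheme
 bl_pbl_physics     = 1,   ! YSU planetary boundary layer
 sf_surface_physics = 2,   ! Noah land surface model
 ra_sw_physics      = 1,   ! Dudhia shortwave radiation
 ra_lw_physics      = 1,   ! RRTM longwave radiation
/
```

The inline comments are annotations for the reader; operational namelists are usually kept comment-free.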
The analysis comprises the whole of Europe for the GCMs and the coarse nests, and the political region of Germany for the fine nest (see Fig. 1). For the direct comparison of the different simulations, each of the model grids was bilinearly interpolated to regular grids of 0.44° for the coarse nest and 0.0625° for the fine nest. Due to the close agreement between the original model grids and the regular grids, the interpolation does not affect the results significantly. To investigate the evolution of the climate change signal and the corresponding uncertainties, the analysis includes the coarse resolution GCM and coarse nest RCM results. The focus of this paper is, however, on climate change projections for Germany from the high resolution fine nest simulations. To estimate the statistical significance of the climate change signals, Student's t tests were performed and are presented with 95 % confidence intervals in the figures. For the tests, annual means were used in order to have independent and identically distributed data. For temperature, the linear trend within each of the two 30-year time periods was first removed so that the time series become stationary and the distributions remain normal. Before applying Student's t test, the data were tested for normality; the test results showed that the assumption is fulfilled unless indicated otherwise. Due to the near future time period from 2021 to 2050 of this study, the climate change projections of this ensemble can only be compared qualitatively to most previous results (e.g. IPCC-AR4 and PRUDENCE), which usually selected the last 30 years of the twenty-first century as the future time period. The projected annual mean temperature changes over the coarse nest between the time slices 1971–2000 and 2021–2050 are shown in Fig. 2. All GCMs, i.e. the three realisations of ECHAM5 (named E5R1–3 in the figures) and CCCma3 (C3), project a significant warming over Europe. The areal average warming varies between 1.1 and 1.5 K.
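The significance test described above (annual means of two 30-year periods, linear detrending for temperature so the series are stationary, then a two-sample Student's t test) can be sketched as follows. The synthetic series and the 1.1 K imposed warming are illustrative assumptions, not model output:

```python
import numpy as np
from scipy import stats

def detrend(x):
    """Remove the linear trend from a time series but keep the period mean."""
    t = np.arange(len(x))
    slope = np.polyfit(t, x, 1)[0]
    return x - slope * (t - t.mean())

def change_significant(present, future, detrend_first=True, alpha=0.05):
    """True if the mean change between the two periods is significant."""
    if detrend_first:  # applied to temperature, as described in the text
        present, future = detrend(present), detrend(future)
    _, p = stats.ttest_ind(present, future)
    return bool(p < alpha)

rng = np.random.default_rng(0)
# 30 annual means with a weak trend and interannual noise (hypothetical values)
present = 8.0 + 0.01 * np.arange(30) + rng.normal(0.0, 0.5, 30)
future = present + 1.1  # impose a warming of 1.1 K
print(change_significant(present, future))
```

Note that detrending must preserve the period mean (only the slope is removed); subtracting a full linear fit would remove the mean difference the test is meant to detect.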
Both the CLM and WRF downscaling simulations indicate a similar warming, although with lower magnitudes, in the range of 0.9 to 1.3 K for the areal average. No additional impact of the bias was found. On the seasonal scale an increase of mean temperature is present for all applied GCMs and RCMs (not shown). The warming in Northern and Central Europe is likely to be largest in winter. The projected temperature changes for Europe on annual and seasonal scales, as well as the tendency towards less warming in the RCM results compared to the GCMs, agree with previous results, e.g. from PRUDENCE (Christensen and Christensen 2007). Furthermore, the projected temperature range (areal mean between 1.1 and 1.5 K) is also narrower than the bias range (−1 to −5 K for areal means) analysed in Berg et al. (2012). Besides the overall warming, the patterns and magnitudes of the projected changes show a larger impact of the GCM and its realisation on the simulation results than of the RCM, which agrees with previous results (e.g. Déqué et al. 2007). The changes in annual precipitation over the coarse nest are shown in Fig. 3. The south-north contrast in precipitation changes across Europe, described in Christensen et al. (2007), is also indicated by the GCMs applied in this ensemble. The projected changes of the three realisations of ECHAM5 differ in magnitude, but the overall pattern is similar. For Germany, the different realisations of ECHAM5 produce varying magnitudes of precipitation increase. The CLM downscaling of ECHAM5 indicates a pattern of changes in annual precipitation similar to that of the driving GCM, whereas the WRF downscaling tends towards a positive annual precipitation change. The climate change signal of CCCma3 is in the same range as that of ECHAM5 R3, but in this case the CLM downscaling intensifies the climate change signal compared to CCCma3.
On the seasonal scale, the precipitation change patterns of the ensemble members for winter are similar to the annual ones, but for Northern Europe the projected increases are even larger (not shown). For spring, positive precipitation change signals are also projected for Central and Eastern Europe, except for ECHAM5 R3. In summer, the projected decrease of precipitation is more extended in space and magnitude compared to the annual results, except for the WRF simulation, which produces an increase in mean precipitation from the north-east of the domain into Central Europe. In autumn, a stronger precipitation decrease is simulated for the Mediterranean region compared to the annual results. The transition from positive changes in the north to negative ones in the south moves with the season and varies between the ensemble members. A similarly varying transition and similar precipitation change patterns for Europe were also found in previous studies, e.g. in PRUDENCE (Christensen and Christensen 2007; Déqué et al. 2007). No clear impact of the bias on the projected results could be found. Overall, the GCM and coarse nest RCM analysis shows that the impact of the choice of GCM on the simulation results is of the same order of magnitude as that of the applied initial conditions (realisations) of the GCM. Furthermore, the impact of the RCM on the climate change signal is more dominant for precipitation than for temperature, as was also concluded for the PRUDENCE simulations (e.g. Déqué et al. 2007). Note, however, that precipitation is simulated differently by the different models, as it is the sum of multiple processes within each single model. The end results can therefore differ between a GCM and an RCM due to the parameterisations used, and are not necessarily a result of resolution. CLM and ECHAM5 use a more similar parameterisation of precipitation, i.e.
the Tiedtke scheme (Tiedtke 1989), and have similar results, whereas the WRF simulation uses the Kain–Fritsch scheme (Kain 2004), which might explain the different result for summer, as described above. The validation results with ERA40 boundary conditions (Berg et al. 2012) also showed the largest differences between the two RCMs CLM and WRF in summer precipitation. Changes in the seasonal cycle and annual mean temperature averaged over Germany are listed in Table 1 for all applied GCM-RCM combinations. Statistically significant temperature changes at the 95 % confidence level are indicated in bold font. All simulation results show a mostly significant warming for all seasons and consequently also on the annual scale. The RCM simulations with ECHAM5 driving data project a warming between 0.8 and 1.3 K. The CLM simulation with CCCma3 driving data is of the same order, with a projected warming of 1.1 K. On the seasonal scale, the ECHAM5 R1 driven RCM simulations show similar temperature changes, with the largest warming in winter and autumn. The REMO and CLM simulation results are comparable for both realisations of ECHAM5. The CCCma3 driven simulation shows a weaker increase in winter, but otherwise results similar to the ECHAM5 driven simulations. The ensemble mean values project larger warming in winter and autumn compared to spring and summer for Germany. The intra-ensemble standard deviation values indicate a higher variability of the projected warming between the ensemble members in winter and spring. In accordance with the coarse resolution results, the range of the projected climate change signals of the fine resolution ensemble over Germany (see above) is narrower on seasonal and annual scales than the bias range of the ensemble [annual: −2.9 to 0.5 K (Berg et al. 2012)]. In general, simulation results using the same GCM (here ECHAM5 R1 or R2) indicate that the RCM impact on the climate change signal is relatively small for annual and seasonal averages.
In contrast, the selection of the GCM and its initial condition (realisation) results in a significantly larger variability of the projected temperature change for both seasonal and annual averages. The spatial distribution of the annual mean temperature change over Germany in the fine nest is shown in Fig. 4. All RCM simulations project an annual mean warming over Germany, which is significant at the 95 % confidence level for almost all grid points, with a few exceptions in the south for the WRF simulation with ECHAM5 R1 driving data. For the ensemble mean, the warming varies spatially between 0.9 and 1.3 K, and an average of 1.1 K is projected for Germany. Some added small scale details are seen compared to the coarse nest simulations, but generally the patterns are the same. From the ensemble presented here, it is not possible to identify any robust differences between the mean warming in different regions of Germany. For precipitation, the projected changes on seasonal and annual scales averaged over Germany are listed in Table 2. The seasonal and annual climate change signals of the ensemble members vary in both sign and magnitude. All ECHAM5 driven RCM simulations project an increase of annual precipitation in the range of 2 to 9 %. Here, the minimum and maximum changes correspond to the same realisation of ECHAM5, which indicates a large impact of the RCM on the climate change signal of precipitation. The CLM simulation with CCCma3 driving data projects a decrease of 2 % in annual precipitation over Germany, which, although not significant, could indicate a larger impact of using different GCMs. In contrast to the temperature change signal, only the ECHAM5 R1 driven WRF simulation and the ECHAM5 R2 driven CLM simulation show significant annual precipitation changes at the 95 % confidence level. The RCM simulations using ECHAM5 R1 indicate the largest precipitation increases in spring, in particular in March with values larger than 20 % (not shown), and in autumn.
In winter and summer, the climate change signals of the RCM simulations using ECHAM5 R1 also vary in sign. In winter, CLM and WRF project, in contrast to REMO, a precipitation increase, and in summer CLM and REMO project, in contrast to WRF, a precipitation decrease. Different realisations of the GCM impact the RCM results significantly, which is more distinct for CLM than for REMO. The CCCma3 driven CLM simulation projects larger decreases of precipitation in summer than the ECHAM5 driven simulations. Most of the seasonal precipitation change signals are non-significant. In spring, two of the seven ensemble members, and for the other three seasons only one member, project a significant precipitation change at the 95 % confidence level. The ensemble means in Table 2 project positive precipitation changes for winter, spring and autumn, and a negative change for summer. The intra-ensemble standard deviation values indicate a higher variability of the projected precipitation change in winter. Except for the winter season, the RCM simulations with quite different biases [+30 to +60 % for annual means (Berg et al. 2012)] project climate change signals which are much closer to each other (−2 to 9 % for annual means). The results also confirm the large variability of the magnitude and geographical details of the climate change signals for precipitation in Central Europe, as described in Christensen et al. (2007). The corresponding spatial distributions of the annual precipitation change over Germany in the fine nest are shown in Fig. 5. The overlaid contours indicate regions where the climate change signals are statistically significant at the 95 % confidence level. Consistent with the results of Table 2, the projected annual precipitation changes are significant for large regions of Germany for the ECHAM5 R1 driven WRF simulation and the ECHAM5 R2 driven CLM simulation, while there are essentially no significant regions for the other simulations.
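The ensemble mean and intra-ensemble standard deviation quoted from Tables 1 and 2 are plain aggregates over the seven members; a minimal sketch with placeholder values (not the actual table entries):

```python
import numpy as np

# Placeholder annual precipitation change signals (%) over Germany for the
# seven fine-nest ensemble members; illustrative values, not those of Table 2.
members = {
    "CLM-E5R1": 2.0, "CLM-E5R2": 9.0, "CLM-E5R3": 4.0, "CLM-C3": -2.0,
    "WRF-E5R1": 8.0, "REMO-E5R1": 3.0, "REMO-E5R2": 5.0,
}
changes = np.array(list(members.values()))
ens_mean = changes.mean()      # ensemble mean change signal
ens_std = changes.std(ddof=1)  # intra-ensemble standard deviation (spread)
print(f"mean {ens_mean:.1f} %, std {ens_std:.1f} %")
```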
No similarities in the mean precipitation change patterns can be seen between the ensemble members. For the ensemble mean, the precipitation change varies spatially between −1 and 7 %, and an average of 3 % is projected for Germany. The comparison of the projected changes in mean temperature and precipitation between the coarse and fine resolution RCM simulations over Germany allows an estimation of the added value of high resolution regional climate simulations. In general, the projected climate change signals of the coarse domain are transferred to the fine resolution without strengthening or weakening, but the higher resolution adds some more detail to the spatial patterns. Thus, in contrast to the validation results (see Berg et al. 2012), the climate change signals do not per se show the benefit of high-resolution regional climate simulations in providing high detail in the spatial patterns and added value in the precipitation intensity distributions. Figure 6 shows the projected changes of the probability density function (PDF) of daily precipitation intensities, from here on defined for wet days, i.e. days with at least 0.1 mm of precipitation. In general, the intensity distributions of the CLM and WRF models are comparable, indicating a decrease of lower precipitation intensities and an increase for higher intensities. Similar changes in the intensity distribution of precipitation were also found in previous studies, e.g. Boberg et al. (2009, 2010). The change point is at approximately 6 mm/day. WRF produces slightly larger decreases of lower intensities and higher probabilities, in particular for moderate intensities in the range of 10–20 mm/day, compared to the ECHAM5 R1 driven CLM simulation. Different realisations of the GCM impact the precipitation PDFs somewhat, see e.g. the CLM results for ECHAM5 R1 to R3. The precipitation PDFs of the REMO simulations differ significantly from the CLM and WRF results driven with the same realisations of ECHAM5.
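The intensity-PDF change diagnostic discussed here can be sketched along the following lines. The 0.1 mm/day wet-day threshold is from the text, while the 2 mm/day binning and the synthetic gamma-distributed daily series are assumptions for illustration:

```python
import numpy as np

WET_DAY = 0.1  # mm/day threshold for a wet day, as defined in the text

def intensity_pdf(precip, bins):
    """Normalised histogram of daily precipitation intensities on wet days."""
    wet = precip[precip >= WET_DAY]
    hist, _ = np.histogram(wet, bins=bins, density=True)
    return hist

bins = np.linspace(0.1, 50.1, 26)  # assumed 2 mm/day bins up to 50 mm/day
centers = 0.5 * (bins[:-1] + bins[1:])

rng = np.random.default_rng(1)
# Synthetic 30-year daily series; the future period is shifted towards higher
# intensities, mimicking the projected change described above.
present = rng.gamma(shape=0.7, scale=4.0, size=30 * 365)
future = rng.gamma(shape=0.7, scale=4.6, size=30 * 365)

change = intensity_pdf(future, bins) - intensity_pdf(present, bins)
# Hypothetical "change point" diagnostic: first bin where the change turns positive
change_point = centers[np.argmax(change > 0)]
```

With such a shift, the change is negative for low intensities and positive above the change point, qualitatively matching Fig. 6.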
In general, smaller changes of the REMO PDF are projected for the low, moderate and higher intensities up to 40 mm/day, but intensities above 50 mm occur more frequently within the REMO simulations using ECHAM5 R1. The REMO simulation using ECHAM5 R2 projects an increase of lower intensities, and thereby deviates strongly from the other simulations. The deviation of the REMO model in the projected changes might be a reflection of the bias in the REMO precipitation intensity distribution presented in Berg et al. (2012). When CCCma3 is used as the driving GCM, the projected precipitation PDF differs significantly from the ECHAM5 simulations. The general trend of a decrease of lower intensities and an increase of higher intensities is still present, but the magnitude of the projected change is smaller and the change point is shifted to approximately 11 mm/day. Again, the differences in the CCCma3 driven simulation could be due to the bias in the precipitation intensity distribution presented in Berg et al. (2012). The shift of the change point is an interesting result in comparison to Boberg et al. (2009, 2010), where the change point was found to be remarkably similar between different GCM and RCM combinations. Overall, all components of the multi model ensemble, the GCM and its realisation as well as the RCM, significantly impact the projected change of the probability density functions of precipitation. The analysis of the probability density functions already indicates a general increase of higher precipitation intensities for all ensemble members. To identify regions projected to be more affected by heavy precipitation events in the future, the spatial distribution of the projected percentage of wet days in 2021–2050 with precipitation amounts larger than the 95 percentile of the reference period 1971–2000 is shown in Fig. 7.
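The exceedance statistic mapped in Fig. 7 can be sketched as follows, under the assumption that the 95 percentile is taken over the wet days of the reference period (nearest-rank percentile; illustrative only, not the authors' implementation):

```python
def percentile(values, q):
    """Nearest-rank percentile, q in (0, 100]."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, round(q / 100.0 * len(s)) - 1))
    return s[k]

def heavy_day_fraction(reference, future, wet_day=0.1):
    """Percentage of future wet days exceeding the reference-period
    95 percentile; a value near 5 % means no change in heavy precipitation."""
    ref_wet = [p for p in reference if p >= wet_day]
    fut_wet = [p for p in future if p >= wet_day]
    p95 = percentile(ref_wet, 95)
    return 100.0 * sum(1 for p in fut_wet if p > p95) / len(fut_wet)
```

Applied per grid point, values above 5 % would be plotted blue and values below 5 % red, as described in the text.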
Hence, for regions with values larger than 5 % (blue), the 95 percentile value of the present time period occurs more frequently in the future time period; accordingly, for regions with values smaller than 5 % (red), it occurs less frequently. All RCM simulations project a mean probability increase of the 95 percentile value over Germany. The ensemble mean indicates a probability increase for each grid point in Germany in the range of 0 to 1.3 percentage points, with an average of 0.5 percentage points. For the individual ensemble members the projected climate change signal is less clear; the corresponding spatial distributions are very heterogeneous. In particular, the REMO simulations also project regions with a probability decrease of the 95 percentile value of the reference period. The CLM and WRF simulations with ECHAM5 driving data show larger regions with significant changes of the 95 percentile value of the present time period. There are, however, no robust patterns across the ensemble members that would indicate vulnerable regions. In general, the results indicate that all components of the multi model ensemble (GCM, its realisation, or RCM) significantly impact the probability changes of high precipitation events. Furthermore, it is worth mentioning that the spatial distribution of the projected change of the 95 percentiles differs from the mean annual precipitation change patterns. The above analysis of projected changes in precipitation intensities considers wet days only, but the projected changes in the number of wet or dry days are also central to climate change assessments. The spatially averaged climate change signals of the number of wet days over Germany on seasonal and annual scales are listed in Table 3. For the ensemble mean, the number of wet days is projected to increase in spring (5 %) and decrease in summer (−4 %).
Contrary to these projections, the ECHAM5 R1 driven WRF and the ECHAM5 R2 driven REMO simulations project an increase in summer, and the ECHAM5 R3 driven CLM simulation a decrease in spring. For winter and autumn, the climate change signals of the ensemble members compensate each other to approximately no change. Most of the projected changes in the number of wet days are non-significant. On an annual scale, only the CCCma3 driven CLM simulation shows a significant decrease at the 95 % confidence interval, which is mainly due to the reduction of the number of wet days in summer. For the ECHAM5 simulations, only one of the six members shows either a significant increase of the number of wet days in spring or a significant decrease in summer. The results show that, as for the projected mean precipitation changes, all model components significantly impact the projected changes of the number of wet days. For many climate impact studies, in particular in agricultural research, changes in dry periods are even more important than the number of single dry days. Hence, as an example, the projected percentage change of dry periods of more than 5 consecutive days over Germany is shown in Fig. 8. The projected climate change signals of the ensemble members vary in sign and magnitude. On average over Germany, four members indicate an increase and three a decrease of dry periods for the future time period. Furthermore, the corresponding spatial distributions differ significantly, ranging from very patchy to more homogeneous signals. The ensemble mean indicates, except for the north-west part, an increase of the number of dry periods of more than 5 days, with an overall average of 3 % for Germany. The time series of the projected climate change signals are in this case not normally distributed, so the Student's t test was not applied. In general, the results indicate pronounced variations with respect to the selection of the GCM, its realisation and the RCM.
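Counting the dry spells behind Fig. 8 is a simple run-length exercise. A sketch, assuming a dry day is any day below the 0.1 mm wet-day threshold and "more than 5 consecutive days" means spells of at least 6 days:

```python
def dry_periods(precip, wet_day=0.1, min_length=6):
    """Number of dry spells of at least `min_length` consecutive dry days."""
    count = run = 0
    for p in precip:
        if p < wet_day:
            run += 1
        else:
            if run >= min_length:
                count += 1
            run = 0
    if run >= min_length:  # spell still running at the end of the series
        count += 1
    return count
```

The percentage change per grid point would then be the relative difference of this count between the scenario and reference periods.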
The projected climate change signals of a multi model ensemble based on two GCMs (ECHAM5 and CCCma3) and two RCMs (CLM and WRF) were presented. The ensemble of regional climate simulations is characterised by its high spatial resolution of 7 km, achieved with a two-nest procedure over Europe and Germany, and includes, to our knowledge, the first long-term regional climate simulation with the RCM WRF for Central Europe. The ensemble was extended by two ECHAM5 realisations downscaled with the RCM REMO at a slightly coarser resolution of 10 km. The simulations were carried out for the present (1971–2000) and future A1B scenario (2021–2050) time periods. All GCM simulations project a significant warming over Europe on seasonal and annual scales, which is transferred to both nests of the RCMs. For precipitation, all GCM simulations project an increase of annual precipitation in Northern Europe and a decrease in Southern Europe. For both variables, the impact of the two different GCMs on the simulation results is of the same order of magnitude as that of the applied initial conditions (realisations) of a single GCM. The impact of the RCM on the climate change signal is more dominant for precipitation than for temperature. In comparison to the GCM climate change signals, the RCM simulation results tend toward less warming. The projected temperature and precipitation changes for Europe, as well as the different impacts of GCMs and RCMs on the climate change signals, agree with previous results (e.g. Christensen et al. 2007; Déqué et al. 2007). For the fine nest, all simulation results project a significant annual warming over Germany in the range of 0.8 to 1.3 K, with an average of 1.1 K, for the future time period. The results indicate that most of the variability of the projected temperature change is caused by the GCM and its initial condition (realisation). For mean precipitation, the climate change signal of the fine nest is less clear.
The selection of the GCM impacts both the sign and the magnitude of the projected change, and the selection of the RCM also impacts the climate change signal significantly. Over Germany, changes of annual precipitation in the range of −2 to 9 %, with an ensemble mean of 3 %, are projected. The wet day precipitation intensity distributions project a decrease of lower intensities and an increase of moderate and higher intensities for most ensemble members, but the projected changes of precipitation intensities vary significantly between the ensemble members. In contrast to previous studies (Boberg et al. 2009, 2010), the change point between decreases at low intensities and increases at higher intensities is, moreover, not model independent. The climate change signal of the number of wet days projects annual changes in the range of ±4 % for the future time period, with an ensemble mean increase of 5 % in spring and a decrease of 4 % in summer. The projected changes in the number of dry periods of more than 5 consecutive days indicate varying climate change signals for the ensemble members in the range of −4 to 13 %. The analysis shows that the range of projected precipitation changes within the ensemble results from significant variations in the wet day precipitation intensity distributions as well as in the number of dry days and dry periods of the ensemble members. The significance tests of the changes in mean temperature show a robust increase for all ensemble members. In contrast, the significance tests of the changes in mean precipitation, heavy precipitation and the number of dry days show that none of them offers robust results. Often only one or two members of the ensemble show significant results, and they often disagree on both the sign and the magnitude of the changes (see Tables 2, 3). Changes in these variables are thus highly uncertain. Altogether, the analysis of this ensemble in simulating present climate in part I (Berg et al.
2012) and projected climate changes in this paper (part II) has shown the potential and benefit of bringing high detail to the spatial patterns, with added value in particular for the precipitation distributions, even though the simulations suffer from biases in most variables. Despite the different biases of the regional climate models, the range of projected climate change signals for temperature and precipitation is much narrower. Subsequent climate impact studies have to be aware of and cope with these uncertainties. Ensemble approaches are recommended to make the variations and uncertainties of the projected climate change signals visible. Center for Disaster Management and Risk Reduction Technology; http://www.cedim.de. The authors acknowledge funding from the CEDIM project "Flood hazard in a changing climate". The RCM simulations were carried out at HLRS at the University of Stuttgart within the project "High resolution climate modelling" for CLM and the project "High resolution regional climate modeling for Germany using WRF" for WRF. We also extend a great thank you to the CLM and WRF modeling communities, particularly H.-J. Panitz and J. Werhahn of IMK, for help with the RCM simulations. The REMO simulation data was downloaded from the CERA online archive, and our appreciation also goes to the REMO modeling group at MPI-M in Hamburg. We also appreciate the work of the R Development Core Team (2011) and the developers of CDO, and thank the anonymous reviewers for their valuable comments.
https://link.springer.com/article/10.1007%2Fs00382-012-1510-1
Today is Data Privacy Day; Congress voted quietly last year to have the United States join Europe in designating January 28 as an annual, international holiday to raise awareness about the importance of data privacy protection. Just don't tell the social media executives meeting in Davos this week. There, halfway 'round the world from Rotenberg's Washington, D.C., office—and while Rotenberg was commemorating Data Privacy Day in press briefings and other talks elsewhere—the chiefs of Twitter, Facebook, MySpace, Ning and LinkedIn squeezed into a small, packed anteroom at the World Economic Forum to share their predictions for social networks, discuss their impact on society (for better or worse) and ponder why their companies hadn't yet figured out a way to make big money off their subscribers' digital social connections. Tim Berners-Lee, the British physicist who invented the World Wide Web, told the Davos gathering that "little changes in how you treat privacy can dramatically affect the way a social network works." He said that in eBay's case, for example, the site has increased privacy in some areas as the online auction site has matured; it now hides the identity of people bidding against each other. Younger users, though, seem far more open to revealing personal details about themselves. Then it was Reid Hoffman's turn. The executive chairman and founder of LinkedIn told the group: "All these concerns about privacy tend to be old people's issues." Transparency and accessibility are two reasons, he said, that so many younger users—teenagers and young adults—put their mobile phone numbers on Facebook or MySpace. "The value of being connected and transparent is so great," Hoffman said, that privacy is not a concern but a hindrance. Rotenberg wasn't present. But Don Tapscott, author of books on the so-called Net Generation and the need for corporate transparency in the Digital Age, took Hoffman on.
Social networking, Tapscott said, would become what "we want it to be" over time, meaning that if we wish to build civic values into social network sites, we will—and should. "[The Internet] has an awesome neutrality and we need to build into it basic human values," Tapscott said. "...And one of those values is the right to informational privacy and the right to be left alone. I completely reject this view that privacy is dead. It's in deep trouble, it needs to be saved and everyone needs to get involved to protect their own information." What do you think? Is privacy an "old people's issue" or more about civil liberties in the 21st century? Let us hear from you.

Virtual Worlds: The New Green? For more on Drax, see Mo' Real, a short profile and report by Drax on President Obama's health care reform plans that appeared here, on the Cause Global blog early last year.

The annual World Economic Forum got under way today in Davos, with 2,500 of the world's business and political elite gathering at the famed Swiss mountain resort under the apt new theme, "Rethink, Redesign, and Rebuild." It is a chastened crowd: WEF Founder Klaus Schwab told reporters the Forum is being held this year "in a mood of reflection" and that the global economic crisis "is really reflective of a deeper crisis of values, overall." Indeed, there is much talk in Davos this year about the need to fast-forward social enterprise and social innovation, and to more urgently use social media to crowdsource new solutions for many of society's ills—whether in Haiti or closer to home. There also is a hunger among attendees for examples of social innovation that works, and with them, the reassurance that change-for-good can be replicated and scaled globally.
Indeed, the sense of euphoria about globalization that has marked Davos for years has been muted by the global financial crisis, making the 2010 gathering a call for a more civic global society to ensure the sustainability and advancement of global leadership. Some people have returned to Davos this year, says writer David Ignatius of The Washington Post, "because with something as ephemeral as globalization, I think there's a desire to actually touch it, feel it, stand next to its fellow members in line for a cup of coffee or the men's and ladies' rooms. Floating in a networked world, you want to believe that there are real people with hands on the controls, a real place, anchored in the snow and ice each January..." * A meeting of Young Global Leaders, one of WEF's 16 communities holding gatherings this week in Davos, cited three cause-activism trends to watch in 2010. First, given Haiti aid groups' success, expect a surge in the NGO and nonprofit use of mobile and text-messaging applications to raise fast money for a cause. Second, watch for a continued rise in "slacktivism" -- the use of easy, click-to-donate and online volunteering tools by micro-donors to battle social problems. Third, expect to see more nonprofits and social enterprises using "crowdsourced philanthropy" in the form of recent contests such as the Chase Community Challenge, the Pepsi Refresh Project, and the American Express Members project. * A dinner meeting of 30 top social entrepreneurs urged more coordinated efforts to solve global problems and to educate people in developing countries on how to use technology to better their lives. Dinner attendees included Martin Fisher of KickStart, who noted that technology innovations don't tend to be adopted by poor rural farmers, so time-limited "smart subsidies" might be used, instead, to encourage the adoption of these innovations until the commercial markets can take over.
Also attending were Harish Hande of Selco of India, and Andreas Heinecke of Dialogue in the Dark, a social enterprise that employs blind people to host exhibits and business workshops in total darkness, where participants learn new communications perspectives. * Nike formally launched GreenXchange at a CEO breakfast. The Xchange is a Web-based marketplace where companies can collaborate and share intellectual property, which can lead to new sustainability business models and innovation. Ten organizations have already signed on; we'll be looking more closely at this group in a future post.

In the United States, the time spent on social media sites has increased even more, the survey says. Total time spent on network and blog sites has increased 210 percent in the last year, with the average person spending 143 percent more time on these sites than they did a year ago. The U.S. is second only to Australia in time spent on social media, the Nielsen survey says. Social media ranks as the most popular online activity, the pollsters said, with gaming and instant messaging the next most popular. Facebook emerged as the leading destination in the category, with 209.8 million unique users, or 67 percent of the world's Internet population. And that's not all: Twitter continues to grow at a faster pace than any other social media destination, with 579 percent more unique users in December 2009 than in the same month in 2008. But social media site-runners, beware. Nielsen says there is reason to suspect that Twittering has peaked, in that traffic to the site actually declined 5 percent in December. Meanwhile, social networks Classmates and LinkedIn also showed a December decline, year-over-year. Says Nielsen: "This is an impressive and undeniable shift in user mindshare that no publisher can ignore.
It underscores the importance of media brands learning the language of social media and having an active presence in the social networking places that users are now treating like their Internet portals."

So far, the billionaire philanthropist is following 42 people, including Vinod Khosla, the venture capitalist, celebrity Ryan Seacrest, commentator George Stephanopoulos, New York Times columnist Nicholas Kristof, President Barack Obama, and dynamic data whiz and public health professor Hans Rosling, for starters. Gates's move to social media, like Microsoft's move to the Web, comes late in the game. But Gates-watchers are taking a better-late-than-never stand, saying his conversion will undoubtedly lend his foundation more of a "cool factor" as well as help to further soften Gates's anti-social image. * Gates cited the San Francisco startup, Academic Earth, as among new education sites that will "revolutionize education" by offering students personalized learning experiences. Academic Earth, called by TechCrunch a "sort of Hulu for education videos," provides a user-friendly platform for educational video that offers courses and lectures from Yale, MIT, Harvard, Stanford, UC Berkeley, Princeton and others. Gates says his foundation will be investing in online courses that are able to provide interactive applications for children; he says he's also working to ensure that all libraries have computers with Internet access. * Gates endorsed President Obama's push to double foreign aid giving, to make sure "the United States will get up into a very respectable range" of giving compared with other wealthy nations. He admonished Italy, specifically, for lowering its foreign-aid budget, saying that in June, he met personally with Prime Minister Berlusconi to make the case for more support, "but I was unsuccessful," Gates wrote. "This is a huge disappointment since I still think the Italian public wants to be as generous as people in other countries."
Much has been said about how the mobile Internet is helping to imbue our everyday lives with a restless sense of urgency, for better or worse—what Twitter tummler Jeff Pulver calls "the State of Now." But this week, given Haiti's troubles, it's clear that there's one upside of this "nowism" that has become irrefutable: mass-mobile, location-aware micro-donations to people in need around the globe. Not convinced? "Never before have people donated money to disaster relief at the scale and speed and ease as they have in response to the Haiti earthquake," nonprofit consultant Lucy Bernholz wrote today on her blog, Philanthropy 2173. "Technology changes so quickly, that we have almost entirely new platforms to deploy for each new disaster; each (disaster) is the 'biggest, fastest' example ever of using the platform of the moment (to give aid)." This time around, Twitter, texting and social networks led the way and created swarms of instant givers. And what a mobile outpouring it's been: according to the Mobile Giving Foundation, donations made via mobile phones to Haiti rescue efforts during the first 36 hours after the quake had topped $7 million. That tally included all the short message codes managed by the organization, and it's a mobile giving record for funds raised for a single cause. Meanwhile, the American Red Cross—despite the criticism it got during Hurricane Katrina for telling donors their money would be used in New Orleans, when it sometimes wasn't—says it has raised $7 million so far for Haiti through $10 "text" donations. It is coordinating its first-ever texting campaign with a mobile donations firm called mGive, and the outpouring is part of a larger surge of money flowing into international Red Cross coffers for the devastated nation: nearly half of Red Cross donations to Haiti since the quake have come in via texting. As of Thursday night, the Red Cross had raised some $35 million via mobile texts and Twitter blasts and Facebook appeals. And that's not all.
Everyone's now joining the mobile aid party, aware that speed-of-giving is critical as hundreds are now dying in Haiti by the hour. Mashable is reporting that Skype has sent $2 vouchers to all of its customers in Haiti, allowing them to make up to an hour's worth of calls to the United States. T-Mobile, meanwhile, has dropped all charges for calls and texts to Haiti through the end of the month, while other carriers are waiving charges for "donation texts." Text-message donation campaigns will, no doubt, become the first line of response for many more cause activists in the months and years ahead. To be sure, "old-fashioned" online giving will still outweigh mobile giving this year. But the takeaway here? Mobile giving is reaching whole new legions of people, many of whom may never have given anything before texting made it so easy. Text-donations are giving the "urgency" trend in the marketplace—and donors, at least—a whole new way to define "instant gratification." For more on mobile aid to Haiti, see MobileActive.org's post, Earthquake in Haiti: How You Can Help, which is all about managing your inner "now" for Haitians who need all the urgency you can muster. How are you and/or your companies using mobile to help Haitian quake survivors? Is it tapping donors who might never have given before? Let us know and we'll share your work in a later post.

The latest Global Entrepreneurship Monitor was launched today in Santiago, Chile, and it’s showing that young social entrepreneurs are rising in influence and number across the world, chiefly in innovation-driven economies. It was the first time that the 10-year-old survey partnership between the London Business School and Babson College—the largest single study of entrepreneurial activity in the world—included a measure of new social enterprise activity in its annual survey. Among the findings: * More men than women started socially oriented ventures.
* Social entrepreneurs tended to be active at younger ages than business entrepreneurs. * Better-educated individuals were more likely to be social entrepreneurs. * Social ventures were started in a variety of areas—notably education, health, culture, economic development, and the environment. * The distinction between “social” and “regular” entrepreneurship was sometimes blurred. However, GEM says that social objectives (not-for-profit and hybrid social enterprises) still prevailed over more economic (for-profit social enterprises) and less innovative ones (traditional NGOs). * Social entrepreneurial activity was much lower than traditional entrepreneurial activity in almost all countries surveyed, though in some countries (chiefly Peru, Colombia, Venezuela and Jamaica), there was a significant overlap of social and business entrepreneurship, suggesting that the 'social' and 'business' entrepreneurship categories may be blurred. * Social entrepreneurs differed widely in the type of organizations they launched and the kind of social or environmental problems they tried to solve. Social enterprises identified in the report spanned a broad spectrum of categories, including education, health, culture, economic development and the environment. * There were differences in social issue focus among the countries, depending on their level of economic development. "Social entrepreneurs in (developing) economies tend to focus on more elementary issues and pressing needs such as basic health care provision, access to water and sanitation or agricultural activities in rural areas," the report said. "In innovation-driven economies, individuals are particularly active in launching culture-related organizations, providing services for disabled people, focusing on waste recycling and nature protection or offering open-source activities such as online social networking."
Across all 49 countries surveyed, not-for-profit social enterprises were most prevalent (24%), followed by hybrid for-profit/nonprofit models, for-profit social enterprises (12%), and traditional NGOs (8%). There were regional preferences: hybrids, for example, were most popular with social entrepreneurs in the Nordic countries of Finland and Iceland, as well as in Algeria, Uganda, the Dominican Republic, Hungary, Latvia, Malaysia, Belgium, France, the Netherlands, Slovenia and Switzerland. Meanwhile, the for-profit social enterprise model was most favored, GEM says, in the United Arab Emirates, Venezuela and Romania.

The earthquake in Haiti is becoming another important test of social media in advocacy: Catholic Relief Services is using Skype and Facebook in its efforts (phone service is down), while Oxfam is making use of the audio blog site ipadio, so its people can broadcast live updates about their efforts in and around the disaster zone. Ushahidi, a social media platform that crowdsources and maps crisis data, also got off to a running start, deploying a site for Haiti—Haiti.ushahidi.com—within hours of the quake. Ushahidi's goal in Haiti: to provide people with real-time information about any quake-related violence as well as up-to-the-minute data on where to find the closest doctors, supplies, medicine, and shelter. Ushahidi—which means "testimony" in Swahili—was initially created as an early-warning system amid the savage, inter-tribal violence that followed the Kenyan presidential election in late 2007. A government ban on live media throughout that crisis made Ushahidi one of the only places where citizens could share information about the attacks. In Haiti, Ushahidi is again producing "heat maps"—visualizations of places where civic passions overheat or where help is most concentrated and available.
If Haitians can "see" where violence or aid is concentrated in real time during the crisis, says cofounder David Kobaya, they can manage their survival more effectively. Further, those sending aid can target it more precisely to the areas that need it the most. Ushahidi asks citizens to call, text, or email site editors with eye-witness reports or accounts passed along from people on the ground; the nonprofit then aggregates the reports and makes a map, which is posted and updated in close to real time. The more people who send in information, the better; Kobaya says more information tends to verify itself over time. For more on Ushahidi, see "Mob Protection," a Cause Global profile of Ushahidi from September 2008. * Haitian singer Wyclef Jean sent out the following appeal on Twitter before most aid agencies knew whether their people on the ground were safe: "Haiti needs your help if you r in the US text Yele to 501 501 and 5 dollars will go toward earthquake relief in Haiti. International donations can be made @ http://www.yele.org." * techPresident reports that SMS donations to the Red Cross are being passed through without any carrier fees or processing fees, with the Mobile Giving Foundation and mGive handling the transactions and declining to take a cut. Texting HAITI to 90999 sends $10 USD to the Red Cross. Donations are flooding into the Red Cross by text message: within a few hours of operation Wednesday, the program had raised $750,000 from about 75,000 individual contributions, according to Red Cross officials. According to mocoNews.net, a news blog covering digital media, mGive's co-founder and chairman, James Eberhard, was awakened in Pakistan by U.S. State Department social media advisor Alec Ross to get the short code up and running. Since then, #Haiti and #RedCross have both become major trending topics on Twitter, mocoNews reports. * The State Department is tweeting updates from the ground and from Washington. * Partners in Health, led by Dr.
Paul Farmer (Mountains Beyond Mountains), is leading an aggressive online fundraising drive for the country in which it has been working for many years now. * Hundreds of tweets per minute have been pouring into Twitter's #haiti hashtag feed, providing added perspectives to the digital narrative of the suffering, and Twitter user @troylivesay, based in Port-au-Prince, has been posting updates of the aftermath. His tweets have included: "Leaving to look for a list of people; will try hard to report back" and "church groups are singing throughout the city all through the night in prayer. It is a beautiful sound in the middle of a horrible tragedy." * GlobalVoicesonline.com, a crowdsourced news site from citizen journalists around the globe, has been providing first-person accounts from the ground and independent news stories since minutes after the quake. * Twitpics—photographs taken on mobile phones and transmitted instantly around the world via Twitter—are helping to provide a visual narrative of the suffering, both to the public at large as well as to established news organizations, including CNN, which otherwise would not have had access to immediate video and photographs of the devastation. @CarelPedre, the Twitter handle for Carel Pedre—one of Haiti's most popular radio and TV hosts—has been sending out dozens of photographs (including the one illustrating this post, above). Check updates on his Twitter feed.

Today, three young social entrepreneurs -- setting a radical precedent in the social innovation sector -- announced that they are offering up a portion of their future income in exchange for immediate resources to scale their social enterprises. The trio has created a Web site and a name for their request -- the Thrust Fund. They announced their bold move today on the Social Edge Web site in a post entitled, "Invest in Me, Take My Equity."
The three entrepreneurs are: Saul Garlick, 26, founder of ThinkImpact, a startup nonprofit that connects American students to rural villages in Africa to alleviate poverty; Kjerstin Erickson, 26, the founder of FORGE and a blogger on Social Edge; and Jon Gosier, 28, the founder of AppAfrica, a social venture investing in African software entrepreneurs to create jobs and build their own companies. "(We) are announcing that we are ready to do something we had never heard of one month ago," the post reads. "We are going to offer equity in our life's earnings for an unrestricted infusion of cash today." Gosier and Garlick are each offering 100 shares in themselves, priced at $3,000 USD per share, to raise $300,000 each in exchange for 3 percent of each man's future earnings; Erickson is "selling" 200 shares in herself at $3,000 per share for a total capital investment of $600,000, in exchange for 6 percent of all of her future earnings. Interested investors are invited to fill out and sign a contract that further stipulates the terms of the unusual offer. The idea of 1-to-1 investing isn't new in the nonprofit sector. But it's just starting to take off in the social enterprise space. The other week, in a piece for this blog entitled "Mainstream Medicis," I wrote about how one social investor had decided to give a young entrepreneur he believed in some investment capital in exchange for a percentage of her future earnings. That move, detailed by investor/entrepreneur/tech consultant Rafe Furst last fall in his personal blog, has spawned considerable discussion across the sector in recent weeks, but today's Thrust Fund announcement was the first time that any social entrepreneurs have stepped forward to offer themselves as candidates under the concept. The idea isn't complicated. Instead of investing in start-up companies, angel funders could invest in individuals they believe in and then take a percentage of their life's income over time as the ROI.
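The arithmetic behind the offers is worth making explicit, because the share pricing implies a valuation on each founder's lifetime earnings. A minimal sketch (the function name is our own; the figures come from the announcement):

```python
def thrust_fund_offer(shares: int, price_per_share: float, equity_pct: float) -> dict:
    """Summarize a personal-equity offer: capital raised, the slice of future
    earnings sold per share, and the lifetime-earnings valuation the pricing implies."""
    capital = shares * price_per_share
    return {
        "capital_raised": capital,
        "pct_per_share": equity_pct / shares,
        "implied_lifetime_earnings": capital / (equity_pct / 100.0),
    }

# Garlick and Gosier: 100 shares at $3,000 each for 3 percent of future earnings.
offer = thrust_fund_offer(100, 3_000, 3.0)
print(offer["capital_raised"])                    # 300000
print(round(offer["implied_lifetime_earnings"]))  # 10000000

# Erickson: 200 shares at $3,000 each for 6 percent -- the same implied valuation.
print(round(thrust_fund_offer(200, 3_000, 6.0)["implied_lifetime_earnings"]))  # 10000000
```

All three offers, in other words, price a founder's lifetime earnings at an implied $10 million; whether that figure is realistic is exactly the debate the post invites.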
"If we loved perpetual hand-to-mouth fundraising for our social enterprises, we'd never make this announcement," the trio wrote in their Social Edge post. "If the market were up to speed on the scalable potential of social entrepreneurship with engaged funders like the more advanced VC community that the exclusively for-profit sector looks to for scale, this discussion would be lame. But it's not and we are raising money hand-to-mouth when we know for sure that a modest infusion of capital would scale our social enterprises." What do you think? Is the sector likely to see a flood of such investment deals, or is this idea still too new and untested to take seriously? Let us hear your thoughts. The word on Wall Street is that Goldman Sachs -- to soften criticism over the size of its bonuses this year -- is thinking about expanding a program that would require its executives and top managers to give a set percentage of their earnings to charity. Goldman is expected to report later this month a record profit of more than $12 billion for 2009, up from $11.7 billion in 2007. According to The New York Times, the details over the charity requirement are still under discussion, but it would probably mean that Goldman's top guns would be required to give hundreds of millions of dollars to charity. It was not immediately clear whether a new department would need to be created to administer the mandated giving. In recent days, some in the giving sector have suggested the following: that any talk of mandatory charity is "a little creepy" (Daniel Indiviglio in The Atlantic); that giving to good causes "doesn't rack up karma points if you didn't think to do it yourself" (GOOD's Morgan Clendaniel), and that Goldman might consider using some of its enormous profits, instead, to establish some type of new social enterprise or next-phase microfinancing arm to move from a mindset of charity to one of social innovation (yours truly). What do you think? 
Should there be a charity requirement? If not, why not? And rather than contribute millions to various charities, might Goldman use that money in another way to help those in need? Here's a quick-and-dirty list of some of the top social innovation conferences and conversations taking place through June 2010. This list isn't complete; please share any events we've overlooked. A list of July-December 2010 events will be issued in the spring. It's New Year's Day and time to forecast which trends are most likely to shape the cause-wired landscape this year. More than ever, it seems, online activists are divided over how they think the Web will empower them in new ways to make change in the world. To be sure, 2010 will be another tough year economically for many start-ups and social advocacy groups. At the very least, proving social impact will matter more than ever. But these same rising pressures to make measurable change also will lead to innovative new forms of online collaboration and consolidation. Low-cost social media will be used ever more widely and creatively by social enterprises and advocacy groups to aggregate new levels of clout, funding, innovation and community support. * Divisions between traditional "giving" sectors will continue to fade. More organizations, companies and consumers will seek to achieve a branded, demonstrable impact on social problem-solving with the dollars, time and ideas they spend or contribute. Look for social entrepreneurs to collaborate on new ways to prove, measure, and scale their social impact across time, place, sectors and ideologies. The recent launch of the Global Impact Investing Network, for example, will push the cause of social investing by bringing together global philanthropists, ethical banks and social entrepreneurs and enterprises to apply the microfinance model to health care and other services needed by those living in poverty around the world.
Look, too, for more offline conferences that celebrate collaboration among activists across sectors for social change. Example: The two-year-old CUSP conference in Chicago, which features social innovators who are reshaping society by design, regardless of whether they are from the corporate, education, religious, nonprofit, entertainment, or technology sectors. Look for more such efforts to scale cross-sector social innovation for greater impact. * New ways to measure impact will emerge. Mobile and location-awareness technology -- because it will enable people to get closer to measuring their personal impact on the world in real-time -- will continue to radicalize the social enterprise space and the giving experience (watch for Foursquare, Google Latitude, Loopt, FireEagle and other such geo-location "games" to become more socially conscious this year), reshaping how and what people donate. Giving money will become less important than giving voice, giving time, giving influence, and giving work. Look for social networks to create new ways to reward those who demonstrate the most activity around socially-conscious activities. * Micro-activism will proliferate. Expect to see new start-ups modeled after The Extraordinaries, an online micro-volunteering enterprise that enables people to give short bits of their time via their mobile phones from anywhere at any time. Also look for more micro-gift enterprises to emerge, such as Dreambank.org, which allows causes or individuals in need to receive a portion of what they need or want from many people rather than get gifts or input they can't use (such as wool blankets for weather-disaster victims in tropical climates). Additionally, look for more micro-funding groups like World Nomads, which through its Footprints program funds large-scale international development projects through micro-donations. Expect, too, the rise of micro-seed funding and 1-to-1 financing of social entrepreneurs. 
Additionally, get ready to see new types of micro-work enterprises stem from SamaSource, which uses the Web to outsource digital work for larger companies to educated people living in poverty in developing nations. The micro-craze will also lead to more innovative uses of social media platforms like Twitter and Ning by social enterprises to crowdsource micro-services and influence that scales. * More small causes will be aggregated to achieve greater impact. Consider the innovative umbrella group, Wildlife Direct, a Kenya-based nonprofit enterprise that aggregates autonomous wildlife conservation activists under a single umbrella and gives each exposure to individual donors. Says Paula Kahumbu, a 2009 PopTech fellow and Wildlife Direct's executive director: "Underfunding is conservation's biggest threat. By giving each of the wildlife activists a blog, we enable individual donors from around the world to communicate directly with the people they are funding." [Think donorschoose.org meets The Huffington Post meets wildlife conservation.] The goal, says Kahumbu, is to create a single platform for the many smaller groups working to save wildlife in Africa. "We have much to share with each other -- each activist or group of activists is working on a different animal or aspect of the problem. We are stronger working together than we are alone." Additionally, the newly launched Social Entrepreneur open API, which is a search engine for finding social entrepreneurs, is an effort to provide an exchange and transfer of information so as to avoid duplication in the "do-good" sector. Sites to keep watching in 2010 include SocialActions.com and the Compathos Foundation, which connects volunteers and financial resources with nonprofits through digital storytelling. * Co-working goes mainstream.
Until recently, co-working spaces -- which Wikipedia defines as "the social gathering of a group of people, who are still working independently, but who share values and who are interested in the synergy that can happen from working with talented people in the same space" -- were concentrated most heavily in San Francisco and New York. In the past, co-working has mostly been a way for newbie start-up founders to share space and expenses. 2010 will likely see the expansion and formalization of these types of spaces both geographically and intellectually, as more will become incubators for start-up funding and volunteer support. Example: The Unreasonable Institute, a new nonprofit, has announced it will bring 25 global social entrepreneurs to Boulder, Colo., this summer for 10 weeks to co-train, co-work, and explore seed funding collaborations. Look for more such social enterprise colonies and retreats to pop up this year, based on the 20th-century model of artists' colonies and writers' workshops. * Online swarms gain clout. The combination of mobile phone, geo-location, and real-time organizing technologies and platforms will embolden "flash cause and consumer mobs" to exert their influence in new and expanded ways. Globally, mobile data traffic is set to more than double every year through 2013, increasing 66-fold between 2008 and 2013. Look for more nonprofits to "hire" members of online social networks who have the most online and mobile followers to help them raise funds and awareness on-demand; look for a continued rise in new consumer-complaint platforms such as Quiet Riots, a UK start-up that offers disgruntled consumers a way to crowdsource their complaints and give companies a way to address them. Look, too, for more companies and nonprofits to be held more publicly accountable for their actions. Case in point: JPMorgan Chase's recent online contest snafu. What are some of your predictions for the year ahead? Go ahead, it's your turn.
Tell us what you think -- and what we missed.
http://causeglobal.blogspot.com/2010/01/
Use a fundraising consultant to take the hassle out of organizing your event and put the “fun” back in fundraising again. Price points: Many times, different organizations conduct similar fundraisers at the same time. Make sure that the prices asked are comparable to other fundraisers in your community. Check price ranges via the Internet and with other nearby organizations. Look at other catalogs, retail merchant pricing for similar goods, and trust your gut instincts. Doubling up: Double-check all order forms and check payments to be sure they’re correctly filled out. Double-team all money-handling facets of the fundraising process. Have double dates (makeup days) planned in advance in case of inclement weather or other unforeseen delays on delivery day.

Have you ever wondered how a uranium company’s “resource calculation” can increase, sometimes even double? I did, and I began making inquiries about this. In February, during a meeting, it was a topic of discussion with William Boberg, Chief Executive of UR-Energy (TSX: URE). I have also had talks with David Miller, President of Strathmore Minerals (TSX: STM; Other OTC: STHJF), and his senior geologist, Terrence Osier. The difference in the resources reported by a company – in at least one of the examples found below, Strathmore Minerals’ Church Rock property – is due to the mining methods to be used. The grade-thickness applied to the resource may differ between conventional mining (underground, open pit) and in-situ solution mining. That can increase the size of the estimated resource. A Canadian-listed mining company cannot announce its uranium resource estimate unless it files a document called a National Instrument 43-101 (NI 43-101). You may read in some news releases: “These are historical estimates.” The NI 43-101 came about after the 1997 Bre-X Minerals debacle.
Possibly the worst mining scam in Canadian history, it was preceded and followed by other, lesser mining scams. Canadian regulators instituted measures to prevent a repeat performance. A National Instrument 43-101 means that an independent, qualified person has visited the property, reviewed the historical data, and reached a conclusion on whether or not the property has merit. Some of the oft-repeated grumblings by uranium insiders include, "This isn't a gold property in an Indonesian jungle." In fact, they are correct. Many of the properties held by some of the front runners for uranium mining development in the United States have had thousands of exploration drill holes, and hundreds (if not thousands) of delineation drill holes. Using UR-Energy as an example, this company's Lost Soldier project has had more than 3,700 drill holes within a two-square-mile area. Historically, New Mexico and Wyoming have been two of the world's top uranium-producing areas. It is probably impossible to correctly estimate the total number of holes that have actually been drilled in these two states. In one geological textbook, Boberg suggested that millions of feet have been drilled in Wyoming. This raises the question asked at the beginning of this article: "Have you ever wondered how a uranium company's 'resource calculation' can increase, sometimes even double?" Much of what follows is advanced geological mathematics and may be confusing. Behind all the geometrical calculations, there are a few simple explanations. When a major mining company, such as Kerr-McGee, was establishing a uranium resource estimate, it was because its exploration team needed to prove the value of the project and get approval from its board of directors before investing in capital costs. Kerr-McGee used the "Circle Tangent" resource method (don't fall asleep now; we'll explain that in a moment).
Uranium mining in the 1970s and 1980s was mainly underground mining. Capital costs were well above $100 million for a mine and mill complex. Operators wanted to ensure they had plenty of uranium to feed that mill. It should be noted that Kerr-McGee, and other underground operators, used a 6-foot true-thickness cutoff combined with a 0.1 percent grade cutoff. This is 0.6 GT. Six feet was the height of the mining equipment and operator. Phillips Uranium used 8 feet at 0.075 percent, but still 0.6 GT, because its equipment was larger. When the price of uranium rose in the late 1970s, reports, maps, and resource calculation sheets started to show 6 feet at 0.05 percent (0.3 GT) on them. The price went up, and the recoverable grade went down. However, the 6-foot height did not change, just the grade they could economically mine. With in-situ recovery, the thickness of the intercept doesn't matter so much, and a lower grade cutoff can be used. When Strathmore reported an initial cutoff grade of 0.03 percent (standard for ISL operations), its geologists used a 0.3 GT cutoff to compare directly with the 10.9-million-pound resource that Kerr-McGee calculated in 1979 using 6 feet at 0.05 percent. Most uranium mining in the United States is likely to be in-situ solution mining (ISL). Another method used to calculate resources in tabular deposits is called the "polygonal" method. Tabular deposits are amenable to ISL mining. Some believe these are far more accurate in estimating uranium resources. Others disagree. It's not that there is more uranium on the property, or that over the past 20-25 years more uranium "grew" or floated onto the property. It is that the size of the uranium mineralization has been more accurately described. As a bonus to investors, the stock prices often run higher after such announcements are made. In the case of Strathmore Minerals, the stock price rallied by about 10 percent after the company announced the increase in its resource estimate.
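The cutoff arithmetic above is easy to verify: a grade-thickness (GT) product is simply intercept thickness multiplied by grade. A minimal sketch, using the historical cutoffs quoted above; the function names and the two sample intercepts are our own illustration, not data from any filing:

```python
def grade_thickness(thickness_ft: float, grade_pct: float) -> float:
    """Grade-thickness (GT) product: intercept thickness in feet times grade in percent."""
    return thickness_ft * grade_pct

def passes_cutoff(thickness_ft: float, grade_pct: float,
                  min_grade_pct: float, min_gt: float) -> bool:
    """An intercept counts toward the resource only if it meets both the
    minimum grade and the minimum grade-thickness product."""
    return (grade_pct >= min_grade_pct
            and grade_thickness(thickness_ft, grade_pct) >= min_gt)

# The historical cutoffs all reduce to the same GT arithmetic:
print(round(grade_thickness(6, 0.10), 2))   # 0.6 -- Kerr-McGee: 6 ft at 0.1 percent
print(round(grade_thickness(8, 0.075), 2))  # 0.6 -- Phillips Uranium: larger equipment
print(round(grade_thickness(6, 0.05), 2))   # 0.3 -- late-1970s sheets, higher prices

# A hypothetical thick, low-grade intercept: rejected under the old underground
# cutoff, but countable under an ISL-style 0.03 percent / 0.3 GT cutoff.
print(passes_cutoff(12, 0.04, min_grade_pct=0.03, min_gt=0.3))  # True
print(passes_cutoff(12, 0.04, min_grade_pct=0.10, min_gt=0.6))  # False
```

This is the whole mechanism behind a "growing" resource: relax the grade cutoff (because ISL economics allow it) and previously excluded intercepts start counting, without a single new pound of uranium in the ground.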
Maintaining customer loyalty is a challenge that all retail outlets face, especially when it comes to building a solid bottom line. Many factors – positive and negative – can influence bottom-line performance, so quantifying the financial return on investment of any new initiative can be difficult. One solution for maintaining customer loyalty is the implementation of a solid mystery shopping program. The behavioral return on investment in mystery shopping programs can be readily measured, though, provided the results are effectively used to change employee behavior. For example, if a mystery shopping program reveals that employees fail to acknowledge customers when they enter the store 50 percent of the time, the company might take specific steps to ensure that employees understand that they are expected to greet customers within 30 seconds of arrival. Subsequent mystery shopping might reveal that customers are greeted within 30 seconds 95 percent of the time. Thus, the return for the company is that a specific expected employee behavior has improved by 45 percentage points. The financial value of that improvement may be hard to gauge, but consider the benefits of customer retention: a customer who is made to feel welcome and valued is far more likely to do business with a company than a customer who is ignored. There’s no end to the ways a business can subtly build its bottom line, such as suggestive selling, soft selling, or add-ons. To an extent, all of it works. And it works even better when combined with the goal of maintaining a satisfied repeat customer. As a hypothetical example, Wozniak says that at a store averaging 7,000 transactions per week in a 100-store chain, motivating the frontline staff with incentive programs, actionable mystery shopping feedback and refresher training on suggestive selling can improve bottom-line numbers dramatically.
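The arithmetic behind a behavioral-ROI claim like this can be sketched quickly. The greeting figures come from the example in the text; the uplift rate and average add-on value below are our own illustrative assumptions, not figures from the article:

```python
def behavior_improvement(before_pct: float, after_pct: float) -> float:
    """Change in a measured behavior, in percentage points."""
    return after_pct - before_pct

# Greeting example from the text: compliance rises from 50 to 95 percent.
print(behavior_improvement(50, 95))  # 45

# Wozniak's hypothetical chain: 100 stores averaging 7,000 transactions per week.
stores, tx_per_store = 100, 7_000
weekly_tx = stores * tx_per_store    # 700,000 transactions per week chain-wide

# Illustrative assumptions: suggestive selling adds a $1.50 item
# to 5 percent of transactions.
uplift_rate = 0.05
avg_add_on = 1.50
extra_weekly_revenue = weekly_tx * uplift_rate * avg_add_on
print(round(extra_weekly_revenue, 2))  # 52500.0
```

Even a modest assumed uplift compounds quickly at chain scale, which is the point of Wozniak's hypothetical.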
Implementing a non-biased monitoring tool like a mystery shopping program, along with enhanced training and constructive feedback, is a step toward the goal of maintaining customer loyalty. “Most staff members at a convenience store, for example, are initially uncomfortable selling because they’re cashiers, after all, not salespeople,” Wozniak said. “They feel as though they are bugging the customer or are being forced by management to sell something the customer didn’t want in the first place. And at peak customer flow times, suggestive selling becomes just another task that slows down queue times.” Mystery shopping programs produce valuable information about customer expectations for a business’s product or service, and how the staff follows company directives. It’s all about the perception of brand performance and how it affects the bottom line. “Contrary to popular belief, most customers don’t want an exceptional, over-the-top experience during every interaction,” Wozniak said. “They want a routine, pleasant, stress-free, predictable interaction. Exceeding the customer’s expectations every visit is not realistic and is not obtainable for any period of time.” To determine which customer behaviors affect a business’s revenue and expenses, Wozniak suggests a four-step process. Step 1: Make a list of those customer behaviors. “This list would not include customer feelings, opinions or attitudes. Only things that can be measured and observed,” Wozniak said. Step 2: Review the list created in Step 1 and remove any items that cannot be influenced by staff interaction or equipment speed. The new list should only include items where specific staff behavior (or equipment performance) can influence customer behavior.
Step 3: Determine how the staff will need to be trained to affect each customer behavior modification, how it will be measured, how the incentives to perform will be implemented, and what equipment needs to be upgraded, refurbished or replaced – plus the cost to implement each part of the step. Step 4: Create a potential revenue generation (or savings) estimate for each customer behavior alteration process. These will be estimates. In addition, historical references can be gleaned from mystery shopping providers based on an industry’s specific needs. “Mystery shop data is best viewed over time, taking the aggregate picture as a more accurate representation of how your customers see you and your operation,” Wozniak said. The theory is: Though customers generally are not experts in the businesses where they shop, they do know good service when they receive it, a quality product when they purchase it, and a maintained facility when they see it.
I have managed these problems for several years without the use of modern medicine. Tai Chi, meditation, martial arts (exercise), and positive interaction with other people seem to keep a lid on my illnesses most of the time. I don’t like the idea of medication’s side effects, and I don’t believe the answers in life are often found on the easy path (e.g., taking a pill), but rather on the hard path of effort and determination. This is not to say that I feel medicine is not effective, not at all; I just feel all the options should be looked at before making big decisions to do with one’s health. Health of body and mind has always been one of the most important issues in life for us humans. Seeing a doctor is usually the best option when one comes down with an illness or ailment, but people have always tried alternative routes to recovery: e.g., the ‘home doctor’ books of old. Maybe you don’t like sitting in a waiting room with a bunch of other sick people – you might catch something there!
You could be housebound, or maybe you feel like hearing several opinions because past experiences with some doctors have been tainted by poor judgments. Well, the Internet is here with a multitude of options to help you get better. Medical advice and data abound on this household tool in the form of self-help sites, searchable medical encyclopedias, support groups, live chats with doctors, and you can even have professional consultations on-line (for a fee). My advice is just to be wary of the type of language the site is expressing. Many sites will have extensive lists telling you of all the symptoms in the Universe: everybody on Earth could be construed as being ill in some way! This can cause people with a propensity for hypochondria to start diagnosing themselves with all sorts of diseases. In my case, reading about all the symptoms of depression actually made me feel more depressed, as it made me focus on my weaknesses (without too many optimistic perspectives or treatments expressed on some sites). Look for sites that look at things in a positive light. Maybe search out some alternative therapies as well so you can get all the possible different approaches that can be taken to tackle your illness. Some websites can be very helpful with interactive features like ‘Ask the doctor’, on-line questionnaires, and question-and-answer archives that give you an idea of what others have asked and the solutions they were presented with. There is plenty of information on both prescription and over-the-counter medicines so you can make your choices in an informed manner. You can often find information specific to groups, like children, the elderly, men and women. Be wary of the many variables not often foretold in the literature. Beware of medicines from other countries, as they may be different or have different names.
Keep in mind only you know your personal history, all people are different, and always get several opinions. Watch out for sites with grandiose claims, as miracles come from higher powers and not from companies who might just want to get rich quick! I do acknowledge the fitness and diet sites as being very useful as well. You can devise your own fitness plan derived from lifestyle information, and forums and newsgroups give you many opinions on which road to take. You can have your diet analyzed by on-line trainers, some of whom send free newsletters and even send you emails of encouragement to help you towards your goal. So, there’s a whole new world of on-line health advice and information for humanity to access right from home. Get all your options and remember your attitude is often the first real step to overcoming physical, mental, and spiritual adversity!

Everyone from the corner office on down depends on email and expects 100 percent availability. They schedule meetings, assign tasks, answer questions, receive product orders, check progress and exchange friendly greetings - all with the click of a mouse. Communication among customers, employees and business partners has never been easier... until something goes wrong. An employee inadvertently opens the door to a virus that downs the entire system ... A heavy day of email volume overwhelms the allocated storage, impeding performance of other mission-critical IT functions ... Corporate counsel has asked that you turn over all emails from July of last year to settle a patent dispute, and you're not even sure if you have them. All the while, several of your staff members are spending hours trying to solve these problems, while the more strategic and forward-thinking projects get put on hold ... again. Managing corporate email systems has become a nightmare for companies and an expense that seemingly knows few bounds.
Email systems grow so fast that what should be one of the most strategic tools at our disposal can quickly become an out-of-control beast that refuses to be tamed. In fact, according to the Radicati Group, the number of mailboxes is expected to increase by 20 percent or more, and volume per user has grown by 53 percent over last year. No wonder system management is such a daunting task. There's more at stake than convenience. Vulnerabilities are exposed as email volume grows, new viruses attack and CAN-SPAM-like government regulations become more convoluted. A downed email system interrupts business, slows productivity and disrupts potentially critical communication. And companies can be held financially liable for viruses that are inadvertently spread by an employee, or for questionable or inappropriate content transmitted from their systems. Who's managing the Email Store? Most larger companies still place the responsibility of managing their email systems on already overburdened and under-budgeted IT departments, expecting them to expand systems, prevent virus attacks, filter spam and develop archiving solutions - all with shrinking budgets and dwindling staffs. Most smaller companies don't even have that luxury; it's strictly do-it-yourself. Few companies realize what maintaining their email systems internally is costing them - in actual dollars, hardware costs, IT resources, personnel time and lost revenues and/or productivity when the system is not available. The costs are high - it seems there's no end to the complexity involved in maintaining a corporate email system. Most are increasingly heterogeneous, with end users across an organization using different versions and various email platforms - making management and maintenance time-consuming and more complicated than necessary.
IT experts are forced to spend enormous amounts of time maintaining a non-strategic - albeit crucial - function while critical business objectives are set aside to meet urgent email needs. Meanwhile, system managers are constantly fending off attacks from new viruses and worms, and trying to beat back the influx of spam on already overloaded email inboxes. According to a study conducted by the Pew Internet and American Life Project, 25 percent of Internet users have had their computer infected by a virus, most likely from an email message. They are coming fast and furious, and most companies are ill-prepared. Spam and virus filters are not very good, catching a lot of false positives and dumping potentially important email. A full 60 percent of the costs involved in maintaining a corporate email system come down to personnel, so it makes sense for midsize companies to consider outsourcing. Concerns that made companies hesitant in the past - worries about the consistency of an external data center, and fears that service providers wouldn't be able to support a globally hosted infrastructure - are non-issues today. A study by The Radicati Group, released in November, finds that corporations of all sizes are increasingly deploying hosted email solutions as opposed to in-house solutions. The analysts estimate that hosted email currently accounts for about 67 percent of all email accounts worldwide. This trend is attributed to complex in-house messaging solutions, spam and virus problems, storage pressures, compliance requirements and other driving factors.

While no one can predict whether the 2006 hurricane season will mirror the intensity of Hurricane Katrina, one of the deadliest hurricanes in the history of the U.S. and one which caused more than $50 billion in damages to the Gulf Coast region, there are measures homeowners can take to better prepare their new-construction homes during the building phase.
The National Weather Service (NWS), the primary source of weather data, forecasts and warnings in the U.S., recommends homeowners verify that their homes meet current building code requirements for high winds, one of the many hazards associated with vicious Category 3+ hurricanes. The NWS says structures built to meet or exceed current building code high-wind provisions have a much better chance of surviving violent windstorms. "Florida has some of the most stringent building codes in the U.S., led by Miami-Dade County in South Florida," says Dr. Ronald Zollo, professor of civil and architectural engineering at the University of Miami and a licensed professional engineer. "Homeowners and builders need to move away from the traditional structures that cannot withstand the type of lateral forces that extreme weather, such as hurricanes, can place on a home." Another concern for homeowners is flooding. Common with hurricanes, flooding can lead to extensive mold and structural damage. The National Oceanic and Atmospheric Administration (NOAA) states that more than half of the nation's population lives and works within 50 miles of a coast, areas typically more prone to hurricane flooding. Dr. Zollo encourages prospective new homeowners to think proactively. He urges those considering a new home purchase or a rebuild in coastal regions to talk with their builder or architect to understand local building codes and the effects of hurricane-force winds on their homes. Dr. Zollo led a team from the University of Miami to survey damage from 1992's Hurricane Andrew in Florida. He believes that concrete materials, by virtue of their mass, rigidity and physical properties, are generally expected to outperform other construction materials when subjected to extreme environmental conditions, if constructed according to proper building codes.
A proven solution to reduce the structural damage from hurricanes is installing insulating concrete forms (ICFs) - hollow foam forms or panels that hold concrete in place. "Homes built with ICFs using reinforced concrete provide homeowners with sustainable structures capable of withstanding extreme weather conditions," says Dr. Zollo. "They're easier to clean up after hurricane weather or flooding, and they provide the homeowner with moisture resistance in the walls themselves when combined with appropriate interior finishes. Those utilizing ICFs can also expect greater energy efficiency due to added thermal protection." Owens Corning produces the ICF option Fold-Form®. Solid concrete-reinforced walls built with Fold-Form® have been proven to provide superior protection against flying debris from winds as high as 200 miles per hour, when compared to conventional framed walls or hollow concrete block walls. By comparison, FEMA states that Hurricane Katrina achieved landfall wind speeds of 140 mph in southeast Louisiana. According to Dr. Zollo, "In the future, I think we'll see faster recovery times for communities built with ICFs than those that are built without." "While ICFs meet some of the U.S.'s most strict building codes and are up to nine times stronger than traditional wood frames, they're not just for hurricane protection," says Janet Albright, accessories manager, Residential & Commercial Insulation for Owens Corning. "We're seeing a dramatic increase in consumer demand throughout the entire U.S. for building products that are greener, offer greater energy efficiencies, air and moisture management and contribute to greater comfort levels by reducing noise in the home."

How Your Online Identity Can Cause Harm
Being online has given our lives two dimensions, one physical and one virtual, in which we meet and conduct business. Online transactions save time and effort like no other method can. They run on a complex yet very lucrative system that business people use for better service. But the convenience itself is the beginning of the horror. Identity theft happens when another person gains access to private information not belonging to them. In the United States, the highest incidence is among people aged 30 to 39 - squarely within the productive working class. An estimated 700,000 people fall victim to identity theft each year, an alarming number, and each spends an average of $1,000.00 repairing the damage. Imagine the disaster if each of these people spends three days clearing their name instead of earning. It shows how much the crime menaces an economy while creating a long-term struggle of recovery. This crime can happen anytime and anywhere, to people who are simply living peacefully. It is quite ridiculous to hear about a mother spending three years clearing her 3-year-old child in an identity theft case! The most irksome aspect of this crime is when it happens without one's knowledge, as when the stolen identity is used by another person to seek employment. All dues and taxes become accountable to the real owner of the identity. A thief using someone else's identity is obviously doing it for unwarranted squandering of assets and financial savings, for whatever selfish purpose it serves. Is security in the online world hopeless? Though identity theft cases leave many victims hurdling alone to claim their innocence, companies never stop seeking security alternatives, even to the point of hiring reputable hackers to help them find their system loopholes.
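A rough sketch of what the figures above add up to. The victim count, per-victim damage and days lost come from the text; the daily-wage figure is an invented assumption for illustration only.

```python
# Back-of-the-envelope cost of the identity-theft figures cited above.
# The $150/day wage is a hypothetical assumption, not from the article.
VICTIMS_PER_YEAR = 700_000          # from the article
DIRECT_DAMAGE_PER_VICTIM = 1_000    # dollars, from the article
DAYS_LOST = 3                       # from the article
ASSUMED_DAILY_WAGE = 150            # hypothetical illustration

def annual_cost(victims: int, damage: int, days_lost: int, daily_wage: int) -> int:
    """Direct damages plus lost earnings across all victims, in dollars."""
    return victims * (damage + days_lost * daily_wage)

total = annual_cost(VICTIMS_PER_YEAR, DIRECT_DAMAGE_PER_VICTIM, DAYS_LOST, ASSUMED_DAILY_WAGE)
print(f"${total:,} per year")  # just over $1 billion under these assumptions
```

Even with a conservative wage assumption, the aggregate runs past a billion dollars a year - which is the "disaster" the passage asks the reader to imagine.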
The hackers who carried out most of the illegal system intrusions are the ones who know the solution. It makes a lot of sense to hire them and turn them into constructive allies for the benefit of many people. There are accounts of success stories about this. Internet security starts individually. One must not carelessly give out one's identity in every web form encountered on every visited website. Being on the Internet means being responsible for every data transfer in every keyboard interaction. Every site visit exposes your Internet Protocol (IP) address, and webmasters can look through history and online traffic, enough to identify your location and service provider. The existence of these track records, innocently created by simple browsing, enables criminals to work their way into your local computer. By contrast, reputable sites leave scripts for the simple reason of making browsing convenient for returning visitors. Identity theft cases are just a few of the consequences of online presence. One must keep learning and staying abreast of Internet technology to avoid being ignorant in the cyber world. Identity theft is like any other crime: it happens out of misfortune, and no one really knows when. If it occurs, one must face and survive it; meanwhile, preventive measures can be taken beforehand.
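To illustrate the point about IP exposure, here is a minimal sketch of the fields a standard web server access log (the widely used "combined" format) already records about every visit. The log line itself is invented sample data.

```python
import re

# One line in the common "combined" access-log format; the values here are
# invented sample data, but real servers record the same fields per visit.
LOG_LINE = '203.0.113.7 - - [10/Oct/2006:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326 "-" "Mozilla/4.0"'

LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) \S+ "(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

def parse_log_line(line: str) -> dict:
    """Extract the fields a webmaster can read back from a single visit."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else {}

visit = parse_log_line(LOG_LINE)
print(visit["ip"], visit["agent"])  # the visitor's address and browser
```

From the IP alone, a simple whois lookup typically reveals the visitor's service provider and approximate location - exactly the trail the paragraph describes.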
Prostate cancer can be cured if it is found at an early stage, so there is no need to get tense; you can come out of it without harm. Prostate cancer is a disease in which cancer develops in the prostate, a gland in the male reproductive system. Cancer cells may spread from the prostate to other parts of the body, especially the bones and lymph nodes. This cancer develops most frequently in men over fifty. However, many men who develop prostate cancer never have symptoms, undergo no therapy, and eventually die of other causes. When cells in the prostate grow abnormally, they form multiple small cancerous tumors. If the cancer is left untreated, it will at some point metastasize and begin to spread to other organs in the body via the bloodstream and lymphatic system. The main risk factors are: 1. Age: Prostate cancer mostly strikes men above the age of 50; risk increases as a man grows older, and the disease is rare in younger men. 2. Family history: A man is at higher risk if his father or brother has suffered from this disease; if they had it at a young age, his risk is even higher. 3. Race: The disease is more common in African American men than in other men. Being overweight is also a major risk factor, as it is for all cancers.
* Frequent pain or stiffness in the lower back, hips, or upper thighs. How prostate cancer will be treated depends on the different factors present in each case of the disease. The type of prostate cancer, the size of the cancer, its location, as well as the health condition of the patient all play a part in how prostate cancer will be treated. Often, prostate cancer is a slow-progressing disease, although this is not always the case - but this is no excuse to procrastinate. Cure rates are very high if treatment begins while the cancer is in its early stages, but drop steeply once the cancer metastasizes. Get tested today - no pain, much gain! Obviously, the sooner prostate cancer is found and diagnosed, the better the chance of recovery. Success in prostate cancer treatment will depend on a number of factors, including the progression of the disease upon discovery, where the cancer is located, the age and health of the patient, and how it reacts to treatment. Treatments other than conventional western medicine are usually considered "alternative therapies." They usually are not backed by scientific data but by years of use; some alternative therapies date back thousands of years. These medicines, being natural treatments, are said to do no harm at all.

Any young occupant of a corporate workplace who has had their PC crash knows the feeling of dread when the IT expert emerges from the basement, ambles into the cubicle and says, "Alright. What did you do?" It seems, however, that as IT has absorbed the science of networking and grown increasingly complex, liability for software firms, IT firms and internet businesses has become an issue that transcends the cubicle occupant. Technology insurance is in essence liability insurance. It is designed to protect software and IT companies whose programming errors result in business setbacks for corporations using their products and services.
Further, technology insurance refers to policies that protect internet businesses from unauthorized release of private information held on their servers. There are some principal categories of technology insurance that mirror, to some degree, the general categories of business liability. * Directors and Officers liability insurance is now available to those functioning in the startup and IPO arena. This insurance covers the principal players not in established firms so much as in those that fail to deliver the commercial success that early investors anticipated. With any liability insurance policy, the question of how much you need is directly related to how much you are protecting in the way of assets. One of the important components of liability insurance in any of these fields is coverage for legal expenses. Businesses attempting to quantify damage to their functionality and put a price to their losses as a result of digital malfunction are going to be faced with a complicated burden of proof. Obscure issues generally mean longer periods of deliberation and higher legal bills. In the case of protection from online theft by hackers, the liability parameters for those sorts of incidents remain largely undefined. There have been no major cases where awards were made in class actions due to the release of thousands of individuals' private records. Websites that provide a platform for online business transactions usually have a policy agreement that users must read and check off before they can utilize the site. That probably cuts down on frivolous lawsuits over sour transactions, but it does not provide anything like complete protection for the site operator. This is "first person and third person" coverage that is somewhat different from standard product liability insurance because the only product the site provides is the transaction platform itself.
Nevertheless, insurance covers the inevitable legal activity that any business involved in any fashion with a high volume of transactions is going to encounter. The answer to "how much should I have?" is "consult your broker." Liability insurance hasn't changed; only the tools for mismanagement and the types of errors have changed. A good insurance broker can assess what coverage is necessary and which clauses are "window dressing" provided by the underwriter.

For those interested in a career in crime scene investigation, degree, diploma and certificate programs in the subject aim to provide a solid foundation in the American criminal justice and law enforcement systems. As a student of crime scene investigation techniques, you will learn about crime scene safety, and how to search for, collect, preserve, and present evidence from crime scenes. Since advanced technology plays a major part in all aspects of crime detection, you will also be trained in the latest technical innovations in crime scene investigation. You can earn a certificate in crime scene investigation for entry-level jobs in the field, while degrees like the Bachelor in Criminal Justice / Crime Scene Investigation and the Associate of Science in Crime Scene Technology prepare you for the next level on the employment ladder in the field of crime scene investigation. Internships are offered to students of crime scene investigation, as employees in this field are bound to improve their skills only with on-the-scene experience.
If you are technically savvy and also interested in the field of criminal investigation, forensics will probably be right up your alley. As a forensic sciences and technology student, you will learn how to use digital technology to investigate crimes. Emphasis is also given to chemistry, biology, biochemistry, and genetics, as these subjects form the basis of the forensic sciences. The options in this field are diverse; an accounting program in forensics teaches you how to prevent, investigate and detect online financial fraud, while cyber-crime degrees train you in the areas of criminology, data recovery, intrusion detection, network security, and encryption. You can also specialize in toxicology, serology or the study of forensic DNA. A few degrees combine the elements of forensics and crime scene investigation, as the two fields are inter-related. Degrees and certificate programs in corrections aim to provide you with the skills needed to work with criminal offenders. The nature of the job may call for disparate abilities, from dealing with juvenile delinquents to working with violent criminals. You will also learn about the workings of the courts and the judiciary. Support and rehabilitation of criminals will form an important part of your lessons. You will also gain knowledge in the field of probation and parole, and in counseling and monitoring the activities of incarcerated and paroled offenders. The operations of prisons and jails will also form a major part of your curriculum.

Court reporters transcribe the proceedings in a courtroom or during a deposition or arbitration. Armed with a certificate or degree in court reporting, you will be able to work either as an official court reporter or as a freelance reporter. You will learn about legal terminology, legal transcription techniques, shorthand, verbatim recording techniques, and how to operate related equipment.
The curriculum will also include the rules and regulations and the standards and ethics related to the profession.

Appendicitis is a very common type of internal disorder. The disease involves inflammation and infection of the vermiform appendix, a tube-shaped extension of the cecum. Although the exact role of the vermiform appendix inside the body hasn't been clarified yet, it seems that this small organ may facilitate the process of digestion. However, the appendix is not a vital organ and the human body continues to function normally in its absence. The medical treatment of appendicitis commonly involves removal of the diseased appendix from the body. If appendicitis is not discovered in time, the disease can lead to serious complications such as perforation of the appendix and sepsis (spreading of the bacterial infection inside the body). These complications are responsible for causing thousands of annual deaths among appendicitis sufferers. Appendicitis is one of the most common causes of abdominal discomfort and pain in children. Around 10 percent of children that experience these symptoms are eventually diagnosed with appendicitis. Appendicitis is very common among adults as well, and the disease has the highest incidence in the male gender. Diagnosing appendicitis can be very problematic for medical professionals. Appendicitis usually generates non-characteristic symptoms, thus slowing down the process of diagnosis. In many cases, appendicitis may progress latently, causing no outwardly visible symptoms. Asymptomatic appendicitis sufferers may perceive the symptoms of the disorder long after they develop complications, thus having reduced chances of recovery. When appendicitis is accompanied by perceivable symptoms, the clinical manifestations of the disease are abdominal pain (at first in the umbilical region, later spreading to the right lower side of the abdomen), nausea and vomiting.
In children, appendicitis often generates poor appetite, diarrhea or constipation, moderate to high fever and excessive sweating. Apart from patients' reports of symptoms and careful physical examinations, doctors need to perform conclusive tests that can confirm the presence of appendicitis. Common medical techniques used in the process of diagnosing appendicitis are ultrasound tests, computerized tomography and magnetic resonance imaging. However, in special cases, even these modern medical procedures can fail to reveal evidence of physiological abnormalities associated with appendicitis. Under special circumstances, doctors may also perform additional blood analyses in order to detect clear signs of bacterial infection. White blood cell count can sometimes confirm presumptive diagnoses of appendicitis, as high levels of white cells may suggest a severe infection of the vermiform appendix. By analyzing the blood levels of C-reactive protein in patients with suspected appendicitis, doctors are also able to reveal complicated forms of the disease (perforation of the appendix, abscess, sepsis). Correctly diagnosing appendicitis in its incipient stages is a very difficult task. Hence, many patients may have already developed serious complications by the time they are diagnosed with appendicitis. Despite medical progress and the abundance of accumulated data regarding appendicitis, the disease is still revealed late or misdiagnosed at present.

What you see on your web browser is essentially a web page that is downloaded from the web server onto your web browser. In general, a website is made up of many web pages. And a web page is basically composed of text and graphic images. All these web pages need to be stored on web servers so that online users can visit your website.
Therefore, if you plan to own a new website, you will need to host your website on a web server. When your website goes live on the web server, online users can then browse your website on the Internet. A company that provides web servers to host your website is called a web hosting provider. A well-established web hosting provider sometimes hosts up to thousands of websites. For example, the 'Best Web Host of the Year 2003' award winner, iPowerWeb, is a web hosting company that hosts more than 200,000 websites. For that reason, a web hosting company needs many web servers (essentially, these are computers) to 'store' the websites. All these web servers are connected to the Internet through high-speed Internet connections and housed in a physical building called a 'data center'. In order to guarantee that all the web servers are safe, secure and fully operational at all times, a data center is a physically secure 24/7 environment with fire protection, HVAC temperature control, virus detection, computer data backup, redundant power backup and complete disaster recovery capabilities.

What are the different types of web hosting? There are different kinds of web hosting companies out there with different characteristics. The main types of web hosts can be organized into the following categories:

a. Shared Hosting. In shared hosting (also known as virtual web hosting), many websites share space on the same physical web servers. Depending on the web host, a physical web server can host a few hundred to even thousands of different websites at one time. You may wonder: if a physical web server is shared by so many websites, will the performance of the web server deteriorate? In fact, web servers are usually high-end, powerful computers, so they can support up to a certain number of websites without any problem.
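The shared-hosting model described above can be sketched minimally: one server answers for many sites and picks the right content by the HTTP Host header each browser sends. The hostnames and paths below are hypothetical examples.

```python
# Minimal sketch of how one shared server distinguishes the many websites it
# hosts: the HTTP Host header selects a per-site document root.
# Hostnames and directory paths are hypothetical examples.
SITES = {
    "alice-example.com": "/var/www/alice",
    "bob-example.com": "/var/www/bob",
}

def document_root(host_header: str, default: str = "/var/www/default") -> str:
    """Map an incoming Host header to the directory holding that site."""
    hostname = host_header.split(":")[0].lower()  # strip any :port suffix
    return SITES.get(hostname, default)

print(document_root("Alice-Example.com:80"))  # /var/www/alice
```

Real servers such as Apache or nginx do the same lookup through their virtual-host configuration, which is why hundreds of sites can share one machine and one IP address.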
But when the web server is overloaded and exceeds the reasonable number of websites that it can support, you will begin to experience slower responses from the web server. However, a reputable and experienced web hosting provider will constantly monitor the performance of the web server and will add new web servers when deemed necessary, without sacrificing the benefits of the website owners. Since a physical web server is shared (disk space, computer processing power, bandwidth, memory) by many websites, the web hosting provider can afford to offer a lower hosting price. For the same reason, websites on shared hosting have to accept slower server response times. Typically, shared hosting plans start at $5 - $20 per month.

b. Dedicated Hosting. In contrast to shared hosting, dedicated hosting assigns a specific web server to be used by only one customer. Since a dedicated web server is allocated to only a single customer, the customer has the option to host single or multiple websites, modify the software configuration, handle greater site traffic and scale the bandwidth as necessary. Therefore, dedicated hosting commands a higher premium, typically starting at $50 per month and ranging up to $200 - $500 per month. As a result, dedicated hosting is regularly used by high-traffic and extremely important websites.

c. Co-location Hosting. In dedicated hosting, the web server belongs to the web hosting provider and customers only rent the web server during the hosting period. In co-location hosting, by contrast, customers own the server hardware and only house their web server within the web hosting provider's secure data center. In this way, the customer has full control over their web server and simultaneously benefits from the 24/7 server monitoring and maintenance provided by the secure data center. Depending on the monthly bandwidth and rack space required, co-location hosting typically ranges from $500 - $1000 per month.
d. Reseller Hosting. In reseller hosting, a web hosting provider offers web server storage to a third party (i.e., a reseller) at a discount price, who then resells the web server storage to its customers. Typically, resellers are web consultants - web designers, web developers, or systems integration companies - who resell web hosting as an add-on service to complement their other range of services. Commonly, resellers can receive up to a 50 percent discount on the price of a hosting account from the web hosting provider. Resellers are allowed to decide their own pricing structure and even establish their own branding (in other words, a reseller sets up its own web hosting company on the Internet and starts selling web hosting plans under its brand). To the reseller's customers, the reseller is the web host provider. When technical problems such as server downtime and access problems arise, the reseller has to correspond directly with the actual web host provider. Because the communication passes back and forth between customer and reseller and between reseller and actual web host provider, problems will undoubtedly take longer to resolve. Unless you are running your own personal website or non-profit website and are willing to take the risk of poor support from the reseller, reseller hosting is generally not a good option. However, the web hosting market today is filled with resellers that sell the lowest-priced web hosting plans. So how do you tell a genuine web hosting provider from a reseller? You can't judge by the availability of a toll-free number alone, because some web hosting providers even provide their resellers with their own toll-free numbers for co-branded technical support. When the reseller's customer calls the number for technical support, the web host uses the reseller's name, so the customer thinks that the support is coming from the reseller directly.
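The reseller economics described above reduce to simple arithmetic: buy at a discount off the host's list price, sell at a price you set yourself. A minimal sketch, with all prices hypothetical:

```python
# Illustrative reseller economics from the passage: the reseller buys hosting
# at a discount (commonly up to 50%) and resells at a price it sets itself.
# All dollar figures here are hypothetical examples.
def reseller_margin(retail_price: float, list_price: float, discount: float) -> float:
    """Monthly profit per account: what the customer pays, minus the
    discounted wholesale price the reseller pays the actual host."""
    wholesale = list_price * (1 - discount)
    return retail_price - wholesale

profit = reseller_margin(retail_price=10.0, list_price=8.0, discount=0.50)
print(f"${profit:.2f} per account per month")  # $6.00
```

The margin explains why the market fills with resellers undercutting each other, and why their support quality varies so widely.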
Likewise, don't be fooled by a professionally designed website alone, because it is extremely easy to create a professional-looking business website nowadays. In general, resellers can be distinguished by their hosting prices and company information.
Lisa R. is a child of an alcoholic. She grew up in a nice home with a loving family who seemed to have everything. Inside her house, however, there was a very different story that her family kept from friends and the community. Lisa's mother was an alcoholic who drank every single day and eventually died from liver disease when Lisa was just 22 years old. Now, at age 35, Lisa is in recovery from the very same disease. Almost one-fourth of children in the United States are exposed to alcohol abuse or dependence in their families before the age of 18.i Yet many alcoholics tackle this disease alone, viewing it as a test of personal willpower, rather than seeking help. "I had no idea alcohol addiction was a disease when I was growing up. I just thought my mother acted that way because she felt like it," said Lisa, a mother of three daughters. "It wasn't until I found myself in the same boat that I began to understand her more. I just felt hopeless. I'd resigned myself that I was just going to die like my mother because there was no help for this." In order for people with alcohol dependence to get the necessary help, it is important that health care providers recognize alcoholism is a disease that can be treated. "New advances in scientific research have produced a better understanding of the physiological changes of the brain from chronic, long-term exposure to alcohol," said Barbara Mason, Ph.D. of the Scripps Research Institute in La Jolla, California.
"The normal balance of brain chemistry is disrupted in a patient who is addicted to alcohol. We believe that restoring a normal balance of brain chemistry effectively helps patients maintain sobriety." Eight million people suffer from alcohol dependence,i yet only approximately 20 percent receive treatment.iii In the last decade, there have been few advances in the treatment of alcohol dependence. Like many alcoholics, Lisa has gone through several unsuccessful attempts to treat her dependence on alcohol. However, this past March she found that combining behavioral therapy with the prescription medication called Campral is the most effective treatment for her. Lisa said, "I used to think, there's no way that I'll ever be able to go the rest of my life without a drink. But now, with the medication I am on and the hard work with group therapy, I find I can resist the need to drink." Lisa has been abstinent ever since starting that treatment program. Campral® (acamprosate calcium) is contraindicated in patients with severe renal impairment (creatinine clearance ≤30 mL/min). Campral is contraindicated in patients with known hypersensitivity to acamprosate calcium or any excipients used in the formulation. Campral does not eliminate or diminish withdrawal symptoms. Alcohol-dependent patients, including those patients being treated with Campral, should be monitored for the development of symptoms of depression or suicidal thinking. The most common adverse events reported with Campral vs. placebo (≥3% and higher than placebo) were asthenia, diarrhea, flatulence, nausea and pruritus. Campral is a registered trademark of Merck Santé s.a.s, subsidiary of Merck KGaA, Darmstadt, Germany. iv Campral® (acamprosate calcium) Delayed-Release Tablet Prescribing Information, Forest Laboratories, Inc., St. Louis, MO, 2004.

Depression is known to have serious effects if it persists for a long period, but its actual causes are not well defined. There are several theories that point to a number of internal and external factors that can cause depression.
There is another theory that attributes the causes of depression to genetics. According to this theory, a predisposition to depression runs in families and their behavior. Many studies show that family members affected by depression have a genetic makeup that differs markedly from that of families untouched by the illness. Changes in brain structure or brain function may also be among the causes of depression. There is no definitive evidence that brain function or genetics predisposes a person to depression, but there is enough research to suggest that this is the case. Low self-esteem is likewise thought to be a cause. A person suffering from low self-esteem with these mood traits will dwell on pessimism, worthlessness, a desire to end life, and even attempts at suicide. In this case, a person with low self-esteem and depression can see only the negative side of life. These feelings of low self-worth and pessimism, when they provoke a depressive response, may be among the causes of depression. When we are overwhelmed by stress, we can easily slip into depression. The many pressures we face in our lives and the great expectations placed on us contribute to this stress, and as stress mounts, depression sets in. Low self-esteem, pessimism, and stress are considered the psychological causes of depression. Other causes of depression include physical changes in the body. Severe health conditions such as Parkinson's disease, heart attack, stroke, and diabetes can lead individuals to believe their lives are no longer worth living. This emotional state pushes many into a depressive phase, and feeling depressed in turn delays recovery. The exact cause remains unclear, but experts agree that certain physical, emotional, and genetic traits can lead to depression. Whatever the actual cause, individuals suffering from depression have a very low quality of life. Until we can define the true cause of depression, the best we can do is help relieve it. If you are considering a career in system administration, you may have no clue what it entails. Well, don't fret; I will cover the basics and educate you so you can make an informed decision. Most students begin college life without a solid understanding of their academic course. Book knowledge is great, but this field is hands-on, and you need to know more than books can teach you firsthand. Books can guide you, inform you, and serve as great reference material, but real polish comes from experience and years of accumulated knowledge. I know you have heard that before. So what does a system administrator need? Well, it really does take years of experience, but from the start you must have passion: enjoy the work of a system administrator. A system administrator's day-to-day operations consist of troubleshooting, problem solving, maintenance, installation, configuration, and broad system management. Depending on your skill level and taste, you might support several operating systems, such as Linux, Unix, or Windows. The truth of the matter is that when seeking employment you should stay within your comfort zone; you can always learn more advanced skills as you progress in your specialty. Staying within your comfort zone makes you the master of your domain and can expose you to new avenues of interest. Hardware problems, server maintenance, and data or system recovery are among the issues a system administrator may face daily; the job is to keep all computer systems up and running at all times. It is also your responsibility to monitor systems before failures occur and to perform preventive maintenance on each system's software and hardware. You must maintain proper file structures, security permissions, system maintenance, system inventory, and system functionality.
As a sys admin you are the front line of defense, and you should recommend software and hardware purchases for your organization. With better tools and the increasing availability of resources, the attrition rate and stress level of the sys admin job have decreased greatly, but the work is still time-consuming and can still be stressful or overwhelming for a beginner. The best way to relieve or reduce stress in the workplace is to take a break and unwind. Free your mind and take a deep breath. Return to your work area with a fresh mind, ready to tackle problems one at a time. Always remember that a computer can only do what humans instruct it to do! So if there is a problem, thinking logically will lead you to the answer: since the computer has no thoughts of its own, it can only fail because of a logical problem. No organization is perfect, and there is room for improvement in every one. Most companies fly by the seat of their pants and are at the mercy of their system administrators. The sys admin is usually the invisible force of a company (from the customer's or client's point of view), because the sys admin works in the background. Invisible force or not, the company's success and overall existence depend greatly on the sys admin's work. Without proper maintenance and management of its computer systems, there is no telling where a company would be. The U.S. Energy Information Administration issued very encouraging data this week, which should give a boost to many of the companies developing their uranium properties in the United States. Front-runners, with the more solid outlook, include Strathmore Minerals (TSX: STM; Other OTC: STHJF), UR-Energy (TSX: URE), Uranerz Energy (OTC BB: URNZ) and Energy Metals (TSX: EMC). The U.S. government’s uranium annual report should also help bolster the aspirations of the more speculative uranium explorers and developers we have previously written about, including Kilgore Minerals (TSX: KAU), Max Resources (TSX: MAX; OTC BB: MXROF), and Northwestern Minerals Ventures (TSX: NWT; OTC BB: NWTF), which also plan to explore their U.S. uranium-mineralized assets. Beyond powerhouse producers, such as Athabasca and Australia, the EIA report reminded us of the production capacity of the various U.S. facilities. While U.S. utilities require between 50 and 60 million pounds of uranium to fuel reactors, the domestic uranium industry is producing but a fraction of what is needed. Total “existing” production capacity from permitted In Situ uranium recovery stands at 8.8 million pounds annually. U.S.
utilities need to begin looking beyond next year’s annual report. The time is now to foster and encourage the small domestic uranium industry before everyone but the United States has available uranium supplies to power their nuclear fleets.
Ruminations is a new category we’ve recently added here at The Well Report. Its content will consist of my sundry thoughts. God Bless and happy reading! My definition of what it meant to age well was shallow. To me it meant maintaining aesthetic beauty. Over time, I discovered it was not limited to physical appearance, but included quality of life and matters of the heart. Sophia was an older woman I’d met in church over 25 years ago. She had a pretty face, wore her hair dyed jet black and still had an amazing figure. She was a very popular middle school teacher whose zany persona reminded you of Lucille Ball. Sophia attracted young people everywhere she went. It was nothing for her to hop on a kid’s skateboard in an attempt to ride or giggle with some young girl about a secret crush. Most of us attributed her energetic qualities to her relationships with young people. We thought this was her secret to ageing well. She was someone I regularly talked to on the phone. We’d mostly discuss her many sons and daughters who were actually former students and church members. After knowing her for about ten years, one day she said something interesting: she told me she had talked to her real daughter. She said it in a very casual way and I didn’t probe. Over the next few years the story unfolded. Sophia had gotten pregnant while in college. Her daughter Nancy was raised by her mom. On numerous occasions she would recount incidents where her colleagues discovered that she had a daughter out of wedlock. Even though Nancy was now a beautiful, successful principal at a local elementary school, these events still seemed to haunt Sophia. Fast forward 12 years, and Sophia was still vibrant and attractive. She talked about Nancy slightly more over the years but to very few. I was the one she confided in when Nancy was diagnosed with brain and lung cancer. She was sent to live with Sophia. She cared for her briefly before sending her to live with her brother and his wife.
Nancy died a short time later. At the funeral, Sophia still worried that people would somehow discover Nancy was born out of wedlock. This made her uneasy. At the time of Nancy’s death they had not completely reconciled. One day, I stopped by Sophia’s house for a visit shortly after Nancy passed away. I almost did not recognize her. Her body was frail; her skin was ashen and wrinkled. Her voice sounded gravelly and weak. She could not finish her sentences without a hacking cough. Her decline was shocking. Sophia claimed not to know the nature of her illness. A short time later she died of brain and liver cancer, just as Nancy had 17 months earlier. Over the years she had admitted to bitter and unresolved feelings and had plenty of opportunities to resolve them. I learned from her the importance of making peace with your past. Doing so will prosper you on the road to ageing well. I had the awesome privilege of being the featured blogger this week for Beyond Baby Mamas. Check it out! Today’s Community Blogger is Tamara D. Brown. Tamara is a social research interviewer by trade and a minister by vocation. She blogs at The Well Report. Tamara became a single mother in 1979. She was 19. She married 11 years later. She is the mother of one daughter: BBM’s founder, Stacia L. Brown. Stacia sat down with her mom and asked her a few questions about her early single-parenting experiences. Here’s what she had to say about sacrifice, the importance of finding trustworthy caregivers, and determining your child’s love language. 1. Take us back to the day you discovered you were pregnant. What do you remember about it? I knew from the moment of conception. From that day on, I would not take any medication, not even an aspirin. Then when I felt the first flutter, I called you Stacia. It was surreal to hear Shara’s name come out of Deana’s mouth when I asked how she met Billy. I knew she knew of her. Who didn’t?
Once, I was on an online message board and one of the subjects was North End Landmarks. There, listed among historical monuments and turning points in the city’s history, was Shara’s full name. She was the only person listed. I tried to email the original poster of the message but he didn’t reply. Was he a victim of her wrath? Or someone who tried to love her? I was living in an apartment not far from here. Billy and Shara lived upstairs and Shara’s cousin Robin lived downstairs. Robin and I were really cool and I used to hang out at her place quite a bit. One day she said, ‘Shara don’t be treatin’ her man right. He’s a good man too. I’mma hook ya’ll up.’ So that’s how it all started. Me and Billy used to meet up at Robin’s place, mostly to talk. Not long after we started seeing each other, I decided to move and I asked him if he would come with me. He said yeah. I was really scared of Shara, but I didn’t let her know it. ‘Cause, girl, she is really crazy…you heard what she did to that one man, right? I could not believe my good fortune: I was finally going to find out more about Shara and “what she did to that one man.” That had become a catchphrase you’d hear sprinkled in adult conversation. It would always be followed by a grimace or a head shake. His name and how he encountered her wrath was a mystery to everyone. “Well, what did she do to him?” I asked casually as we headed toward the highway ramp. I hoped that Deana didn’t notice how perked up my ears were or that I was starting to drive faster than usual. Billy had bought Deana her dream car, an olive green Volvo sedan. It was her pride and joy. Aside from Billy, I was the only other person she had ever let drive it. Girrrll! She lured him into the hallway of an apartment house near Cabbage St. He owed her some money, right? Once he got inside there, he said he wasn’t going to give her nothing, right?
She pulled a big piece of wood off the staircase…girl, it had the nails in it and everything, and beat him half to death. To this day, he’s still messed up. I wondered if he was the original poster on the message board. Anyway, one night we were inside of our apartment, right? We had moved down to Sands. Somebody was knocking on the door, right? I looked through the peephole and it was Shara! I was so scared, right? I told Billy he had better get her away from outside my door…’cause you know I’m scared of her but I won’t let her know it. Deana’s comments about how afraid she was of Shara would always be followed by a nervous chuckle. I couldn’t imagine Deana being afraid of anyone; however, with this, she convinced me. “I been knowing Shara since I was a little girl. Billy went outside there and I heard her tell him to come home with her and he said, ‘Nah, this is where I’m at now.’ Girl, then it sounded like they was fightin’, right? And Billy came back in the house and said Shara had cut him on his face. Girl, blood was everywhere!” I remember several times wondering about the deep gash on the side of Billy’s nose. I never asked how he got it. It had this crisscross look, like a carving. It looked deliberate. Deana said Shara called the police and told them something about Billy having warrants and he got arrested. Shara ended up leaving before the police got there, and they never saw her again. Billy told Deana that night he was convinced he would be with her forever. My mind couldn’t help but drift while Deana was talking. I could see Shara in my daydream heading towards Albany Avenue alone in the dark. It was the street where Miss Mabel’s church and the Rockabye bar were located. The avenue where folks poured out their sorrows in one way or another. I know what she did was terrible but at that moment, I really felt sorry for her. I don’t think she meant to hurt Billy when she cut him.
I believe she just wanted to leave her mark, an insignia that said, Remember, you loved me first. Deana reminded me again that she remembered Shara while growing up and how the tales of her always frightened her. Each time she mentioned her name, she’d scan the outside of the car, like she expected Shara to jump in front of it. She told me she knew a girl who knew Shara’s whole family. Deana said that woman told her a story of alleged abuse that Shara experienced as a child. It was said that the acts were only directed towards her. I won’t record the details of the abuse, but after hearing it, I was certain I had discovered the root of her rage. “You know she had those other two kids, right?” Deana continued as we pulled into her driveway. It was a known fact that Shara had another set of children at a very early age who were given to someone else to raise. Deana said the woman told her it was rumored that Shara’s father, Mr. Neil, was also the father of the children. I felt a twinge of sickness in the pit of my stomach. Hearing the news was almost too much to bear. I couldn’t help but think back to the expression on the faces of her family members when anyone mentioned her name. Now I realize, it was a look of pity and shame. “Guess that’s how she got that way,” Deana said, shrugging her shoulders as we headed towards the front door. I tried to share the story with Deana about how Shara had asked me to attend church with her when I was a child. As with everyone else, her reaction showed indifference. I thought about how pristine Shara’s childhood home was. I could imagine them scrubbing and cleaning, hoping the secret stain would be washed away. But like grape juice splashed on clean white linen, it would be next to impossible to remove by human effort. I thought of times I’d watched her attempt to reach for normal. That day I understood, if these accusations were true, nothing in her foundation could ever support that stand.
For several days, the flames of Shara’s rage kept coming to me. Sometimes fire purifies; sometimes it damages. Please stay with us for the conclusion, Grapevine, Part 3. Life Through Me is a series of autobiographical vignettes, featuring various people and experiences I’ve encountered. The first section, Life With Billy, was about my relationship with my favorite uncle. The second section, Billy’s Girlfriend and Shara’s Song, explored his turbulent relationship with his ex. Grapevine is the three-part conclusion to this segment. Hope you’ve enjoyed the series! Because my life had always been interwoven with Billy’s, I would periodically make trips back to North End to visit. My main reason for returning was that I was curious about his new girlfriend, Deana. On the surface, I had grown indifferent about my quest for Shara, partly because Billy stopped mentioning her and also because I was almost certain it was her I had seen the last time I was in town. If that was truly Shara I had seen, when had she become someone who would deny who she was? I was disappointed in her. But another part of me was still perplexed about how she had come to be. Why was she so different from her siblings? What was the source of all her rage? Would I ever solve these mysteries? I don’t quite remember the first time I actually met Deana but I do remember my reaction to her. I was as befuddled about Deana’s arrival as I was about Shara’s departure. Staring at her standing slightly behind Billy that day, I tried hard to find something about her that appealed to him. Billy always liked flashy women who seemed to live life in the fast lane. Deana was as plain as Shara was ornate. She held her head down as if she was waiting for some type of approval. Meeting Deana caused me to reminisce about the day I first laid eyes on Shara.
I couldn’t have been any more than four years old, but I remember the day I stood there looking at this bejeweled woman and listening to Billy announce her name with pride. There was something strange about her; perhaps it was the full cast she had on her left leg. I think the story behind that was that she had gotten in a fight with some man and her leg was broken in the process. She strutted around in the cast almost like she did in high heels, acting as if she had known us for ages. Nothing hindered her. I remember wondering why someone would want a girlfriend with a broken leg. It was like having a damaged doll. Why won’t Billy take her back and get another one? my four-year-old mind wondered. Looking at Deana the day I met her, I could see brokenness, but not in her limbs. Something seemed to be broken in her spirit, making her appear disjointed. It was as if she had to heal on her own. There was a shyness about her; you could tell that by the way she cocked her head to the side. Still there was something in her stance that let you know she was the gatekeeper of her soul. I could tell that Deana was a perfect blend of raw city grit and southern charm. She had created this persona and she seemed determined to hold it together. Deana was the color of coffee beans roasted in the blistering Hawaiian sun. You got the impression that your experience with her would be strong and bitter. However, it didn’t take long to realize that she was guarded with her true personage. She reserved it like fine dark chocolate you keep for special guests. As time went on her demeanor lightened, her eyes and her smile sweetened, and before long, like a great cup of coffee, she’d warm and stimulate you. Deana and I became fast friends. I liked watching her cater to Billy. It was nice to finally see someone caring for him. He was 16 years older than her. He became the father she needed; she was the daughter he wanted. She had a girly cackle that he loved.
I realized it the day she told me he had sent a clown with balloons to her job on her last birthday. “Tahamee, you should have seen it, right? That clown came inside there and just started singin’,” she chuckled as Billy stood beside her, smirking. Deana had a funny way of talking. She would always use unnecessary words in her sentences. It was as if she thought by doing so she’d add strong emotions to her statements. Theirs was a love that we all watched unfold over time. Even though their relationship was being perfected, they both seemed to be self-medicating something. She was a shopaholic; he was a drinker; and they both were chain-smokers. Still, they were steadily becoming one of my favorite couples. Billy was starting to lose that worried look he had when he was with Shara. Our family adored her and I liked to see my grandmother purse her lips and relax her shoulders when she’d announce to the family that they would soon be visiting. She seemed pleased that her oldest son had managed to find a woman who had her domestic abilities. It didn’t take long for my friendship with Deana to progress to sisterhood. I was honored, knowing that this had only happened because she allowed it. It was through her I learned how to truly love and care for a man. Some things she told me; some I learned through observation, eventually watching Billy decide to stop drinking and marry her. As with any bond, Deana and I shared take-to-your-grave secrets and had days of strife. If you were someone she had newly trusted, she would sulk over an offense and give you the silent treatment. She could ignore you for hours or days, periodically glancing at you, making sure you were still nearby. Our relationship was on the mend when Deana entered treatment for a terminal illness. When I first heard the news, it was as if someone had placed a huge bowling ball on my chest. It was a crushing pain.
Soon as I heard she had started daily treatments at the hospital, I rushed back to North End to be by her side. Though she was always a woman of faith, there was a sadness about her now. It was difficult listening to her conversations. They were always worded in a way that let you know she had accepted her condition and was just sitting around waiting for the day she would die. I tried to do everything I could to boost the morale in their household. I would take Billy out for long drives. Billy and Deana had moved to the suburbs, so we’d ride down our old street to look at 105, which was now overgrown by weeds and had a huge sign with cement blocks, threatening trespassers. I would get out and talk to those who still lived in the old neighborhood. He would stay in the car with his arms folded saying, “Tam, these young thugs don’t care about us old timers anymore.” I thought visiting the old neighborhood would cheer him up, as he liked to recall his days of street life. Closing the book entitled North End, I placed it on the lowest shelf in my mind. I wanted it to grow dusty and old. Hopefully one day it would disintegrate. What purpose did that place serve in my life? Why were my experiences there so ugly? So painful? Will I ever have happy endings? I thought one day. It wasn’t hard to make new friends in the Midwest. Before long my days were filled with football games, quarters parties and trips to the mall. Almost as swiftly as the seasons changed, junior high days turned into high school days and my social life went into high gear. I dated. A lot. It seemed that for several years straight I always had a love interest but no love. Nobody ever sparked my happy butterfly. A few times during my high school years, I went back to North End, although I avoided all the old landmarks. I’d heard there had been an exodus and all our old neighbors had moved to the suburbs or the outskirts of town.
Like a cancer, our beloved building 105 was now totally ravaged by drugs and crime and had even claimed the life of Mr. Doyle, our landlord. 105 was his pride and joy. We were always taught to be polite and respectful to him and his property. It was not so with the new tenants. He was lured into the basement around the first of the month, robbed, and shot, execution style. I still feel a pang in my heart when I think about it. Most of the former residents will not even go down that street, let alone talk about living there. During one of my visits to North End, I was able to catch up with Debbie. We went out one night and she told me that Derek had been asking about me. I blinked extra hard as I felt my happy butterfly dance a jig. “Oh?!” I replied, trying real hard not to seem too eager, as the last words he had spoken to me still stung a decade later. “Yeah, girl, he said next time you come to town he wants to see you,” Deb said nonchalantly. She always said things in a way to let you know there was something else on her mind. She wasn’t impressed; after all, he was her first cousin and not a big deal. To me, it was everything. I’m not sure how it all happened but it wasn’t long before I was making regular trips back east. Billy had moved back and was living with his new girlfriend Deana. I laugh now about how I would tell Billy I really wanted to visit but it was Derek I wanted to see. Derek still had his boyish good looks. Even though there was more thug now than Alex Vanderpool, he managed to maintain both images. I loved spending time with him when I visited. We would go to clubs and restaurants, visit friends in North End or stop by to see his family as we tried to make the most of our brief time. We were both happy to finally see what it felt like to actually date as adults. He was living with an older girl whom I had met as a child. She lived around the corner from Miss Mabel’s. I never even bothered to probe to find out more about his relationship.
I was too busy enjoying my first love. During one of my visits, Derek and I had planned to meet up downtown. Not sure why, since normally he would pick me up at Billy and Deana’s place. They lived in the Sands, a high rise apartment community on the outskirts of downtown. Pacing back and forth as I waited, I noticed two women who were the same height coming up the block. They stood out because they had on too many clothes for such a warm spring day. My eyes focused on one of them. She had caramel, even-toned skin and a bloated look that made it seem like if you stuck a pin in her she would deflate and draw in like a raisin. She didn’t look like anyone I knew, yet something about her was familiar. “Shara!” I yelled, jumping in front of her. Her eyes locked with mine. I saw fear in them. “Uh uh.” The old Shara I knew was a quick liar. I was certain this was her. Her clothing made me second-guess myself. The Shara I used to know was a sharp dresser. This woman’s clothing was dark and made her look frumpy and old. After interrogating her for a few minutes, she continued to deny that she was Shara. Something in her eyes was pleading with me not to ask any more questions. The other lady walking with her had moved over to a grassy patch under a lamp post and stayed within earshot. She had dropped her head but would occasionally dart her eyes in my direction. Her body language revealed to me that my assumptions were correct: I had indeed found Shara. Feeling helpless and unable to come up with any more questions, I had to let her go. Moving aside, I watched as she and the other woman sluggishly walked down the hill to the heart of downtown. I would recall this story to Billy many times over the years, wondering if that was really Shara. He would always listen intently, as if he were waiting for me to reveal some missing part of the story. Mentions of her would soften his face, saddening him.
Like a dazzling firecracker that lights up a summer sky but swiftly turns to a downward fizzle, so went my relationship with Derek. We were worlds apart. He loved street life more than anything. I could see the rush in his eyes when he talked about the cops chasing him, where he stashed his wares or how he beat some charge. I wanted a simpler, quieter life, to be married one day and perhaps to have more children. I could never imagine living back in North End; he couldn’t imagine life anywhere else. I remember our last night together. After a night on the town, we went back to Billy and Deana’s and decided to take a stroll around the complex. I don’t know if it was actually said, but we knew we probably wouldn’t see each other for a very long time, if ever. With the bright downtown lights as our backdrop, we paused by a wooden utility pole. Derek removed a knife from his pocket and carved Derek and Tam forever into the pole, encasing it in a heart. “I’ll never get married until I hear that you are,” Derek said as we headed to the front door. I knew he was probably lying but I loved the sound of it. It was a corny line, I know, but there was nothing lame about what we felt. It was as if Derek was determined to give me a complete first love experience. Something that would warm my heart, causing my butterfly to go into a frenzy. I haven’t seen Derek since that night. I did, however, receive a call from him a few years later. I was living here in Maryland and he wanted to see if we could give it one last try. He was even willing to meet me halfway, by moving to Maryland. Who knows for sure if he was really serious? He may have been sincere or perhaps just wanted to move his “operation” to my region. I told him that I was engaged to someone else. I wanted so badly to throw caution to the wind and follow my heart, allowing my butterfly to soar like it was meant to. I chose to stay with the man I was engaged to. Our relationship ended a few months later.
I don’t regret either decision. Through Derek I learned your happy butterfly is eternal. There’s more; please keep reading with us as we finally learn more about Shara! As much as I tried to, I could not recall having any interaction with Shara after going to church with her. It was almost as if a scene had ended in a play I had been watching. The lights dimmed, the curtain dropped, but there was no applause. How did this drama end? Would I ever find out what happened to the protagonist? Over the years, I had hoped so. Part of the reason I lost track of Shara was because we had moved back to the Midwest, where I found Billy despondent. I don’t know why Mom and I relocated when we did, but for me it could not have happened soon enough. My mother was in a tumultuous relationship that I hoped day and night would end, and I was experiencing puppy love which ended with me being dogged. His name was Derek. He was the cousin of my neighbor Debbie who lived in 105. One Saturday, I was pretty bored and thought I would go visit her. There was never a dull moment at Debbie’s place. Their living room was always filled with drug addicts who were heading to or fresh out of rehab. Some were relatives, and some were friends, who would congregate in their front room discussing court dates and methadone dosages. “Immo get it together, Miss Sadie; next time you see me, this monkey won’t be on my back,” someone would say. “Well, I sure hope so!” Miss Sadie, Debbie’s mom, would always reply with a sigh, as she rolled her eyes towards the ceiling. Miss Sadie used to own a bar on the avenue similar to the one Billy would take me to. She had become ill not long after opening the bar and was disabled. She was now an armchair psychologist, and it seemed as if every addict or hustler had followed her from the avenue to her apartment for advice. So there I would sit for hours listening to Miss Sadie dishing out homespun wisdom.
She knew every hustler, street urchin, or prostitute who had ever strolled the avenue and could go back three or four generations giving details of their family history. Some days she would just sit, chain-smoking Camels, and school me on the ways of street folk. “Now a wino will be honest with you, Tam. When he’s drunk, he will spill his guts and admit that he is an alcoholic. But there is something about dope that makes people lie,” she said one day as her voice trailed off. We sat silently, as if she needed time to think about the many times she had been lied to. I had expected the day that I met Derek to be like any other visit to Miss Sadie’s and Debbie’s apartment. I could already hear the rumble of simultaneous conversations as I approached their door. Once inside, I gave a swift greeting and slipped into a nearby chair like I was attending a community meeting. The visitors that day didn’t look like folks I had seen in North End before, and they weren’t: at least not anymore. They were actually relatives of Miss Sadie’s: her sister Juanita and two of her children, Angela and Derek. Juanita and her husband Ron owned a nightclub on the other side of town. Determined to free their family from North End, they scrimped and saved until they were able to move to the side of town where I had visited Shara’s family. “Oh Tam, I want you to meet my sister Juanita and her…” was all I heard Miss Sadie say that day, once I laid eyes on her nephew, Derek. He was a slender-built, cinnamon-colored young man with soft brown hair and a lazy eye that gave him a dreamy look. Derek was the coolest guy I had ever seen. His clothing was preppy and polished like Alex Vanderpool’s, but something underneath that image screamed young thug. I was completely smitten. That day it was as if I had swallowed a happy butterfly that never wanted to be free. It was content just fluttering around in my stomach.
My first reaction must have been very obvious, as I caught a glimpse of Miss Sadie and her sister exchanging glances and giggling. She would tease me about that for years to come. After a brief visit, I excused myself and headed back to our apartment. I felt like if I didn’t leave soon that happy butterfly was sure to fly right out of my mouth. Plus, staring at Derek and his sister’s clothing was starting to make me feel like Cinderella before the ball. My pastel colored sweatshirt and jeans were no match for their attire. Derek was wearing a pair of really nice dress pants with a plaid shirt, a v-neck angora knit sweater, and a black leather jacket. Angela’s outfit gave her a militant, defiant look. Her huge curly afro couldn’t help but complement the brown suede fringed jacket and mini skirt she was wearing. Her enormous hooped earrings, choker and high-heeled suede boots made her look confident and powerful. Instantly, I wanted to be like her. Something about their apparel made me think of how they must have chosen their images carefully and how hard their parents must have worked to help them maintain those images. Over the next few months, Derek would make several visits to 105. Sometimes with his mom, but most of the time alone. For fleeting moments I would allow myself to think that his frequent visits to North End were because of his growing attraction to me, although deep down I knew they weren’t. Juanita and her husband had prepared a banquet table for their children, allowing them to feast on everything their new money could provide. North End had already whetted Derek’s appetite and he could not stay away. Derek’s real reason for visiting North End didn’t matter to me; I was just glad when he did. I’d look for reasons to be around when he was visiting, but I’d beat myself up because I couldn’t think of anything to say. Being around him made me shy and awkward. There were plenty of boys who liked me in our neighborhood but none as intriguing as Derek.
Is he shy or just trying to play it cool? Does he even like me at all? Does he think about me as much as I think about him? I’d think as I wrote our names, encased in hearts, on every flat surface I could find. Whenever he visited, we would engage in some conversation and I even think he had called me a few times. Still, I wanted more of his attention but wasn’t sure how to get it. When I wasn’t pining over Derek, I would occasionally think about the girls he must be meeting in his neighborhood. Girls who sparkled more and always knew the right things to say. Those thoughts usually made me more uneasy, and it would show whenever he came around. It wasn’t long before he stopped visiting 105 altogether. Debbie and her family had moved to a housing project across town and I hardly saw her. One day I got a call from Debbie. “Derek is having a party this weekend. Why don’t you see if you can come over and maybe we can go to the party?” she said. I was all for it. I knew getting a ride would be difficult for us, as neither of our parents had cars. Saturday night came and, as we predicted, we couldn’t get rides to the party. This didn’t seem to bother Debbie at all, but I was very disappointed. I begged her to call him. I knew I was getting ahead of myself and my emotions were overriding my judgment, but I had to find out how he really felt. Mom had already announced that we would be moving to the Midwest any day now. Before my life in North End came to a close, I wanted it to have a happy ending. Debbie finally gave in and agreed to call. “Oh somebody wants to speak to you….Tam” I heard her say before handing me the phone. “Hello?” I said with a nervous giggle. I could hear that the party was in full swing. The music was loud and bumping and the people were louder. It sounded packed. I had hoped that perhaps Derek would want us to come once he heard that we weren’t able to, that he would even ask his parents or one of his older siblings to pick us up. Quite the contrary. 
After a few minutes of awkward dialogue, he said these words: “I have to go! I’ll talk to you when the time comes….if the time comes!” That’s all I remember about that night. I’m not sure if he hung up on me or if I laid the phone down or told Debbie. All I knew that night was that the chapter of my life that had taken place in North End was over, and hopefully, I’d never have to read it again. Little did I know that, years later, I’d be compelled to. Although it was a warm spring day outside, there was something about the inside of Shara’s church that reminded me of autumn. It had the feel of a season where things that were once alive and flourishing were now cold and dying. As Shara and I were walking to church, I was wondering if the service was going to be more like the ones at the church of Miss Mabel, the new babysitter Mom had found for me. Miss Mabel, whose chestnut brown skin and long shiny black hair made her resemble a thinner version of Mahalia Jackson, was what we called ‘sanctified’ back then. She attended the holiness fire-baptized church down on the avenue. I remember the day I went to her house with Mom to talk to her about babysitting me. “Well, you have to bring her dresses…’cause we go to chuuurch,” she said as they sat there discussing times and prices. It was something about the way she said church that made me think we would be there for a very long time. It wasn’t long before I was attending the nightly revival services along with Miss Mabel and her eight kids. We’d take up a whole row of the tiny storefront church. The hot bright lights and mahogany wood paneling on the walls and floors made the church look like the basement of someone’s home. There were no frills in the sanctuary or the parishioners, as if any decor or adorning had to be left on the other side of the threshold. Still, the tiny holiness church spared no expense on the instruments. 
The state-of-the-art organ, drum set and microphones were quite similar to those I had seen in the Rockabye Bar, down the street. We would watch the ushers “nurse,” as we called it, fanning heavy-laden souls and snatching babies or eyeglasses from the folks so they could feverishly dance around uninhibited. Our prattle couldn’t be heard over the loud shaking of the tambourines, or the saints picking them up and putting them down on the hardwood floors. The holy dancing sounded like the marching of a well-disciplined army. They were soldiers indeed. Eventually, without being told, I’d learn to sit still in the house of the Lord, realizing it was the place where God lived. There I’d sit in taupe-colored folding chairs, watching bodies fall around me, slain by the power of God. During one service, a man fell in the aisle to my right. His glassy eyes were fixated on the ceiling as his mouth foamed, eventually muffling his hallelujahs and cries to Jesus. “Those people have the whitest teeth. I guess it’s all that foamin’ at the mouth,” I remember Mom saying one day. I’m not even sure why she was telling me that. She had a way of telling me things that went way over my head, kind of like her comments were meant for grownups, but since there weren’t any around, I’d just have to do. I sat there, calmly peering down at the man stretched out on the floor, remembering Mom’s description but beginning to comprehend on my own what was taking place. I understood that this was one of many ways that people served and sacrificed to God. Over time, I also noticed the strength and power He gave them in exchange. As soon as we entered the sanctuary of the church I was attending with Shara, I realized there would be a vast difference between Miss Mabel’s church and Shara’s church. Still, I knew God had more than one household and I needed to be still while visiting Him. As soon as Shara wearily sat down on the pew that Easter Sunday morning, her shoulders drooped forward and she sobbed. 
She cried so hard I thought she was going to break in two. Her whole body shook, especially her stomach. The tears were coming from a deep cavity in her soul that most people didn’t think she had. She was equipped with an endless amount of tissue. As soon as she soaked through one she would simultaneously pull out another. I had never seen anybody cry that hard for two solid hours, let alone be silent while doing it. She didn’t even look like herself. With her black mascara collecting at the bottom of her lashes, her sharp ebony eyes resembled muddy puddles in a rainstorm. I didn’t even know she knew how to cry. At one point she just covered her mouth, as if she wanted to throw up. I felt so bad for her. Shara’s church wasn’t the kind of church you cried in. It was the type where folks left their tears at home, only showing up there to prove how successful they were. Still, it was the only place Shara knew. Nobody looked at her. She was the only one crying. No usher came over to her with tissue or a hug. No minister or deacon approached her to pray. Instead Shara worked it out the best way she knew how. Although she was not physically leaning on me, I was starting to feel heavy and helpless. It felt like the weight of her burden and the flood of her tears would eventually swallow me up. Why didn’t I ask if we could go to Miss Mabel’s church? I thought. I wasn’t certain if that was the right place for her or not. Her passion didn’t match theirs, but any place would have been better than where we were. One thing I was certain of: Shara had handpicked me to be there with her that Sunday morning; I felt it. I’m not sure why she chose me to go with her that day. Could I have been the only one who wouldn’t judge her or think that her tears weren’t real? Maybe she saw me as someone who would use all the grey matter they could muster to try to comprehend her pain. Who knows for sure? All I know is that Shara could be full of drama, but this was no performance. This was the real her. 
I don’t remember anything said in the sermon. The message and the announcements had the same tone. There was no soul stirring music. It felt more like we were there to mourn the dearly departed rather than celebrate Someone who triumphantly rose. The only offering I remembered that day was Shara’s tears. If any of her family members were there for the Easter service, she didn’t greet them, and they didn’t come over to her. “Well, let’s go,” she said with a faint smile as soon as the service ended. I noticed it wasn’t hard for her to rise to her feet. She was light as a feather. “You wanna go get some ice cream?” she said as we headed towards the door. Lincoln Dairy, a local ice cream parlor, was across the street. Everybody went there on Sundays. We crossed Main Street to go to the dairy, never discussing the church or the service. Once we placed our order, we sat there eating our ice cream, chattering about nothing in particular. I kept trying to search her face, hoping to get a clue of what was going on inside of her. Her dark eyes were back on duty, darting around the ice cream parlor. The only thing I remember about the walk home was how brightly the sun was shining on us.
Hair follicles are spaced apart from one another at regular intervals through the skin. Although follicles are predominantly epidermal structures, classical tissue recombination experiments indicated that the underlying dermis defines their location during development. Although many molecules involved in hair follicle formation have been identified, the molecular interactions that determine the emergent property of pattern formation have remained elusive. We have used embryonic skin cultures to dissect signaling responses and patterning outcomes as the skin spatially organizes itself. We find that ectodysplasin receptor (Edar)–bone morphogenetic protein (BMP) signaling and transcriptional interactions are central to generation of the primary hair follicle pattern, with restriction of responsiveness, rather than localization of an inducing ligand, being the key driver in this process. The crux of this patterning mechanism is rapid Edar-positive feedback in the epidermis coupled with induction of dermal BMP4/7. The BMPs in turn repress epidermal Edar and hence follicle fate. Edar activation also induces connective tissue growth factor, an inhibitor of BMP signaling, allowing BMP action only at a distance from their site of synthesis. Consistent with this model, transgenic hyperactivation of Edar signaling leads to widespread overproduction of hair follicles. This Edar–BMP activation–inhibition mechanism appears to operate alongside a labile prepattern, suggesting that Edar-mediated stabilization of β-catenin active foci is a key event in determining definitive follicle locations.

Periodic patterns are a recurring theme in anatomical organization. Examples in diverse organisms include insect bristles, mammalian hairs, the location of leaves on a plant, and the location of stomata on those leaves. In all of these cases the position of each element in the pattern is defined relative to the others rather than to an absolute anatomical location. 
With regular patterns found so widely in nature, a key question in developmental biology is how an ordered array of structures can be generated from an initially homogeneous field of cells. In general terms, such patterns can be generated by using two signals with different ranges of action (1, 2). These activation–inhibition systems rely on an activator that promotes (i) its own synthesis, (ii) assumption of a given cell fate, and (iii) synthesis of an inhibitor of this fate. Crucial in generating a spatial pattern is that the activator acts locally, whereas the inhibitor acts at a distance from its site of production. These types of molecular interactions are predicted to be capable of generating a periodic pattern by amplifying stochastic asymmetries in initial concentrations of activator and inhibitor (1). Cells on the surface of mammalian embryos are competent to become either hair follicle or surface epidermis. They coordinate their fate choices to yield a stippled pattern of follicles, relying on communication within the skin rather than any external positional information. Recombination of epidermal and dermal components of embryonic skin established that communication between these cell layers is absolutely required for initiation of hair and feather development and that the dermis is responsible for inducing morphological changes in the epidermis (3, 4). Many molecules that play a role in hair follicle development have been identified (5–8), but the regulatory relationships between signaling pathways involved in this process are largely unknown. This is a particularly important problem because it is the interactions between molecules, rather than the intrinsic function of any individual gene product, that is responsible for orchestrating pattern formation. 
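The two-signal activation–inhibition logic described above can be made concrete with a small numerical sketch. The following is a generic Gierer–Meinhardt-type activator–inhibitor simulation on a one-dimensional ring of cells, not the authors' model of follicle patterning; all variable names and parameter values are illustrative choices, with the only essential ingredients being local autocatalysis and a far-diffusing inhibitor.

```python
import numpy as np

def gierer_meinhardt(n=200, steps=20000, dt=0.01,
                     Da=1.0, Di=40.0, rho=1.0, mu=1.0, nu=1.2, kappa=0.01):
    """Generic activator-inhibitor reaction-diffusion system on a 1-D ring.

    The activator a promotes its own synthesis and that of the inhibitor i;
    a acts locally (small Da) while i acts at a distance (large Di).
    kappa adds mild saturation of autocatalysis for numerical robustness.
    All parameters are illustrative, not fitted to any biological system.
    """
    rng = np.random.default_rng(0)
    a = 1.0 + 0.01 * rng.standard_normal(n)   # small stochastic asymmetries
    i = np.ones(n)
    for _ in range(steps):
        lap_a = np.roll(a, 1) - 2 * a + np.roll(a, -1)   # periodic boundary
        lap_i = np.roll(i, 1) - 2 * i + np.roll(i, -1)
        a = a + dt * (rho * a * a / (i * (1 + kappa * a * a)) - mu * a + Da * lap_a)
        i = i + dt * (rho * a * a - nu * i + Di * lap_i)
    return a

a = gierer_meinhardt()
# The near-uniform initial state resolves into regularly spaced activator peaks.
peaks = np.sum((a > np.roll(a, 1)) & (a > np.roll(a, -1)) & (a > a.mean()))
print(peaks)
```

As the text notes, no positional information is supplied: the pattern emerges solely from amplification of the small random asymmetries in the initial activator concentrations, and peak positions are defined relative to one another, not to an absolute location.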
One such signaling pathway, composed of the extracellular ligand ectodysplasin (Eda), its receptor Edar, and its cytoplasmic signaling adapter Edar-associated death domain (Edaradd), is required for development of a specific subset of hair follicles. Mutation of any of these three genes, all of which are specifically expressed in the epidermis, causes identical ectodermal dysplasia phenotypes in mouse and human (9–12). This phenotype includes a complete absence of primary hair follicles, which in mouse form between embryonic day (E) 13 and E16. It appears that Edar mutant epidermis retains its naïve state until E17, when secondary follicles begin to develop (10, 13). Secondary follicle formation has a distinct genetic basis, with mutations in Noggin (14) or Lef1 (15) allowing primary hair follicle initiation but blocking that of secondary follicles. Here we study the role of the Edar pathway in follicle patterning using embryonic skin cultures. The culture system allows experiments of short duration with a defined start point. This feature is particularly important for studying signal responses because it can distinguish the proximal effects of an experimental manipulation from those that are a secondary consequence caused by alteration of cell fates. We find that spatial organization in the epidermis is achieved by modulation of signal receptivity, with Edar–bone morphogenetic protein (BMP) activation–inhibition interactions driving the patterning process.

Restriction of Eda Responsiveness Regulates Hair Follicle Density

Although activation–inhibition systems are generally predicted to rely on differential ligand availability, the ligand in this system, Eda, is a poor candidate for conveying positional information. Eda is widely expressed in the epidermis (13) and, when applied in a diffusible form, allows pattern formation in culture (16) and in vivo (17). Consequently, we considered dynamic Edar expression as a means to generate a punctate cell fate pattern. 
Before hair follicle initiation, Edar is evenly expressed through the skin, but as a pattern emerges it becomes up-regulated in follicle primordia and down-regulated in surrounding cells. This dynamic expression itself depends on Edar signal transduction because it is not observed in Edaradd −/− embryos (Fig. 1 A). Thus, as patterning takes place cells display one of three expression states: those with undetectable Edar are likely excluded from a hair follicle fate, those with high-level expression are committed to this fate, and those with moderate expression remain competent to assume either fate. Quantitative RT-PCR (qPCR) revealed that skin of Edaradd −/− embryos expresses the same amount of Edar as that of their heterozygous littermates (Fig. 1 B). This result illustrates that while pattern formation dramatically reorganizes Edar expression, it balances focal up-regulation with widespread down-regulation such that the total level of transcript is conserved. These findings raise the possibility that follicle patterning is controlled by restriction of competence to respond to Eda, with nascent follicles blocking Edar expression, and therefore follicle fate, in their surroundings. If restriction of Edar to follicles and its down-regulation in their surrounding cells are key events in pattern formation, then WT skin that already contains follicle primordia should be less competent than Eda mutant skin to produce follicles in response to exogenous Eda. We cultured WT and Eda −/− E14 dorsal skin in various concentrations of recombinant Eda and assessed hair follicle density by detecting Shh (sonic hedgehog), an early hair follicle marker (18). WT skin contains ≈30 follicles per square millimeter in the absence of exogenous Eda, rising to 50 follicles per square millimeter at high Eda concentrations. Eda −/− skin is much more responsive to Eda, generating a maximum of 90 follicles per square millimeter (Fig. 1 C and D). 
At high concentrations of Eda, mutant skin generated stripe-like patterns, which were never observed in WT (Fig. 1 C). Stripe formation is predicted in activation–inhibition systems when activator concentrations become saturating (19). We noticed that treated mutant, but not WT, explants had follicle primordia aligned along their edges (Fig. 1 C). This observation could be explained if cells along the margin of the tissue have an advantage in forming a follicle by being relieved of inhibitory factors from cells on one side. This edge effect, and the fact that Eda −/− skin can generate nearly twice as many follicles as WT, argues that final follicle locations are not predetermined in Eda mutants. These results suggest that application of exogenous Eda in this culture system initiates pattern formation, rather than simply revealing a preexisting, cryptic pattern. Taken together, these results link widespread expression of Edar with widespread competence to form a hair follicle and indicate that existing follicles cause restriction of this developmental potential.

Timing of Patterning and Morphogenetic Events

To examine the rate of pattern formation and follicle morphogenesis we cultured Eda −/− skin with Eda and fixed samples at various time points. We found an ordered pattern of Edar-expressing foci appearing ≈10 h after Eda administration, after which the spots resolved and intensified to 24 h (Fig. 1 E). The first definitive morphological indication of follicle formation, generation of a condensed placode, did not become visible until 20 h after Eda application (Fig. 1 F). Thus, in this system a molecular prepattern precedes the appearance of morphologically identifiable placodes by ≈10 h.

Eda Is Dispensable for Pattern Formation

The transgenic line OVE951 carries a high copy number of a yeast artificial chromosome that includes the entire Edar locus (20) and consequently overexpresses Edar in its endogenous pattern (10). 
We quantified Edar expression in transgenic skin at E14, finding it to be 4-fold higher than in nontransgenic skin (Fig. 2 A). We found that introducing this Edar-overexpressing locus into the Eda −/− line leads to rescue of primary follicle formation, as determined by Shh expression (Fig. 2 B–D). This finding shows that moderate Edar overexpression leads to ligand-independent signaling and illustrates that an accurate follicle pattern can be generated in the absence of Eda.

Fig. 2. Eda is dispensable for pattern formation. (A) qPCR determination of Edar expression in E14 Edaradd −/− nontransgenic (NT) and Edaradd −/− OVE951 skin. (B–D) In situ detection of Shh in WT (B), Eda −/Y (C), and Eda −/Y (D) OVE951 E15 embryos. Expression is detected only in the vibrissae of the mutant, but overexpression of Edar rescues the mutant phenotype, generating an accurate follicle pattern.

BMP Signaling Inhibits Edar Expression

The finding that Edar expression is undetectable in cells close to nascent follicles, whereas more distant cells express moderate levels of Edar, suggests that early hair follicles produce a diffusible inhibitor of Edar expression that restricts competence to assume this fate in surrounding cells. Two secreted ligands have been described as inhibitors of follicle formation: the BMPs (21, 22) and EGF (23). Both of these molecules block Eda-mediated follicle formation in culture (Fig. 3 A), and so both are candidates for the Edar-repressing activity. We tested these molecules for inhibition of Edar expression in embryonic skin cultures. Importantly, the Edar inhibitor should act before commitment to follicle fate, and so it should repress the basal Edar expression observed before patterning. The up-regulated Edar expression observed in follicle primordia may be under a distinct regulatory control. Because mutant skin exhibits only the widespread, moderate expression of Edar (Fig. 1 A), we used it for these experiments. 
We found that the BMPs strongly repressed Edar expression, whereas EGF did not (Fig. 3 B). BMP repression of Edar is rapid and correlates with levels of phospho-Smad1/5/8, the activated form of intracellular transducers of BMP signals (Fig. 3 C).

Fig. 3. BMPs inhibit Edar expression. (A) BMP4 and EGF inhibit follicle formation induced in Eda −/− skin by Eda. (B) Quantitation of Edar expression in E13 Edaradd −/− skin treated for 24 h with BMPs or EGF. (C) Time course inhibition of Edar by BMP4 and corresponding phospho-Smad levels. (D) Shh expression in Eda −/− skin treated with Noggin only, Eda only, or Eda plus Noggin for 24 h. (E) Hair follicle densities in Eda −/− skin treated with Eda with or without Noggin for 24 h. Error bars show SEM.

To determine whether endogenous BMPs influence pattern formation we used Noggin, a specific inhibitor of BMPs 2, 4, and 7 (24). The maximum hair follicle density achieved by Eda treatment of Eda −/− skin was ≈90 per square millimeter (Fig. 1 D). Cotreatment with Noggin breached this limit, allowing formation of 140 follicles per square millimeter (Fig. 3 D and E), without generation of stripes. This result demonstrates that endogenous BMPs restrict Eda responsiveness during pattern formation. Treatment with Noggin alone caused formation of small clusters of follicles in mutant skin, indicating that relief from BMP signaling is sufficient to allow some sporadic follicle formation in the absence of Edar activity (Fig. 3 D).

BMPs Act at a Distance from the Follicle

Because the follicle itself produces BMPs (25) and has high Edar expression, it must employ a mechanism to evade BMP-driven Edar down-regulation. We found that connective tissue growth factor (CTGF), which binds to and inhibits BMPs in a manner analogous to that of Noggin (26), is expressed in hair follicle placodes (Fig. 4 A) and is a rapidly up-regulated target of Edar signaling (Fig. 4 B). 
In contrast to CTGF, other inhibitors of BMP signaling expressed in developing skin [Noggin, Smad7, and Sostdc1 (sclerostin domain-containing 1)/Ectodin/WISE (25, 27)] are themselves transcriptional targets of BMP (Fig. 4 C), likely acting as feedback inhibitors of the signaling pathway. Consistent with the idea that the placode is a BMP-privileged zone, the BMP target Sostdc1 is expressed surrounding and away from, but not within, follicle sources of BMP (Fig. 4 D), and phospho-Smad1/5/8 is detected in E15 interfollicular epidermis but is largely absent from nascent follicles (Fig. 4 E). Cotreatment of skin cultures with Eda and BMP represses BMP induction of its target gene Smad7, with this weakening of BMP transcriptional responses accompanied by suppression of Smad phosphorylation (Fig. 4 F). Eda’s ability to inhibit BMP responses relies on activation of NF-κB, because pharmacological suppression of this transcription factor allows full BMP response in the presence of Eda (Fig. 4 G and H). This finding is consistent with a role for Eda target genes in BMP inhibition rather than any direct interference between components of the Eda and BMP signal transduction pathways. Thus, the BMPs act at a distance from their site of synthesis, and the early follicle itself is resistant to their action. Taken together, these experiments show that the BMPs display the characteristics of the inhibitory arm of an activation–inhibition loop.

Fig. 4. BMPs act at a distance from the nascent follicle. (A) CTGF is expressed in hair follicle primordia in E15 WT, but not Eda −/−, embryos. Expression is also seen in the eyelid. (Scale bar: 1 mm.) (B) Quantitation of CTGF mRNA in separated epidermis and dermis of mutant skin treated with 1,000 ng/ml Eda for 4 h. Edar stimulation induces CTGF expression in the epidermis, with little dermal expression observed. (C) Quantitation of gene expression in epidermis 5 h after addition of BMP4 to whole skin. Noggin, Smad7, and Sostdc1 are induced by BMP treatment, whereas CTGF is not. (D) Sostdc1 is expressed surrounding and away from follicle primordia 24 h after administration of Eda. (Scale bar: 100 μm.) (E) Immunodetection of phospho-Smad1/5/8 in the epidermis of E15 WT skin, with low levels in follicle placodes (arrowhead). (Scale bar: 100 μm.) (F) Smad7 expression in epidermis 5 h after BMP4 with or without Eda application and corresponding epidermal Smad1/5/8 phosphorylation levels. Cotreatment with Eda suppresses BMP-induced Smad7 activation and Smad1/5/8 phosphorylation. (G) Smad7 expression in isolated epidermis 5 h after BMP4 with or without Eda application in the presence of the NF-κB inhibitor BAY 11-7082. (H) BAY 11-7082 blocks Eda-mediated IκBα phosphorylation in the epidermis. Error bars show SEM.

Transcriptional Responses to Edar Signaling

If Edar functions in the activation arm of this loop, then it should be able to up-regulate its own expression, as well as that of the BMPs. To identify Edar signaling targets we treated Eda −/− cultures with Eda and analyzed gene expression in isolated epidermis and dermis. We used a high dose of Eda to achieve gross up-regulation of target gene expression rather than the relocation of expression observed during the physiological patterning process. Edar expression was modestly activated by Eda within 4 h, with suppression of BMP signaling by cotreatment with Noggin enhancing this autoregulation (Fig. 5 A). Analysis of BMP expression at 4 and 10 h found no significant changes in BMP2 levels (data not shown), whereas BMP4 was strongly activated in the dermis by 10 h (Fig. 5 B). BMP7 displayed the most rapid response to Eda, with initially low dermal levels up-regulated within 4 h (Fig. 5 A) and strongly up-regulated at 10 h (Fig. 5 B). The Eda-induced BMPs were focally expressed (Fig. 5 B). Because Edar expression is restricted to the epidermis (Fig. 
5 A), dermal up-regulation of BMP4 and BMP7 must be an indirect effect. Interestingly, in addition to increasing overall BMP levels, up-regulation of BMP7 in the dermal compartment would be predicted to enable formation of BMP4/7 heterodimers, which have been shown to be much more potent signals than BMP homodimers (28).

Fig. 5. Timing of transcriptional events, patterning model, and Edar hyperactivation. (A) Eda −/− mutant explants were cultured with or without Eda with or without Noggin for 4 h, and expression of Edar, BMP4, and BMP7 was determined in separated epidermis and dermis. (B) BMP expression levels and location in mutant skin cultured with Eda for 10 h. The induced expression of BMP4 and BMP7 is punctate. (C) Proposed molecular interactions that generate the primary hair follicle pattern. Solid lines indicate cell-autonomous local interactions, and dotted lines indicate action at a distance. (D) Keratin14 immunostained longitudinal sections, and hematoxylin and eosin stained cross-sectioned dorsal skin of 10-day-old K14::LMP1-Edar transgenic and nontransgenic littermates. (E) Quantitation of hair follicle density in transgenic and nontransgenic mouse dorsal skin. Error bars show SEM.

Based on these findings we propose a model for primary hair follicle patterning (Fig. 5 C) in which the naïve embryonic epidermis evenly expresses molecules that activate (Eda and Edar) and inhibit (BMP) hair follicle identity. Edar undergoes local autoregulation and signal amplification, induces CTGF, and indirectly up-regulates BMP expression in the dermis. Local inhibition of BMP signaling forces their action at a distance to repress epidermal Edar and hence follicle fate. These interactions serve to amplify deviations in the initial conditions to generate a spatially organized follicle array. We sought to incorporate β-catenin into this model because its activation is essential for follicle patterning and morphogenesis (29). 
We analyzed expression of Axin2, a direct β-catenin target gene (30), and found an ordered array of Axin2-positive foci in WT E15 epidermis. Eda −/− skin had occasional clusters of these Axin2 foci, suggesting that some punctate β-catenin activity is present in the absence of Edar signaling. In culture these foci were suppressed by BMP and enhanced or stabilized by Eda. Thus, some prepatterned β-catenin activity appears independent of Edar function. This observation apparently contradicts our model in which Edar function regulates patterning decisions. One possibility is that these Axin2 foci are proposed follicle locations that have the potential to be stabilized by Edar. To determine when a final follicle pattern becomes fixed we cut cultured skin at different times after Eda application and looked for alignment of follicles along the newly generated edge. Edge effects were observed when skin had been exposed to Eda for <10 h, whereas after this time the ability of the pattern to be reconfigured in response to perturbation of the field is lost (these data are in Fig. 6, which is published as supporting information on the PNAS web site). These findings indicate that a labile prepattern exists in the absence of Edar signaling but that it takes >10 h of molecular negotiation in the presence of Eda to fix a definitive pattern. The proposed model considers restriction of Edar activity as a pivotal event in patterning. A prediction that it makes, therefore, is that widespread Edar activation should lead to widespread assumption of hair follicle fate. We generated transgenic mice expressing a cDNA composed of the intracellular domain of Edar fused to the transmembrane domains of LMP1, a viral protein that confers ligand-independent signaling when fused to cell-surface receptors (31). This cDNA was expressed in the basal layer of the epidermis by using the Keratin14 promoter. 
Three independent founder mice displayed thickened, scaly skin and because of ill health had to be killed within 20 days of birth. Sectioning revealed that the skin of these animals was consumed with hair follicle down-growths, the follicles packed against one another with essentially no intervening spaces (Fig. 5 D). Quantitation of follicles in transgenic dorsal skin showed that it has a density ≈40% greater than that of nontransgenic littermates (Fig. 5 E). This generation of supernumerary follicles confirms the importance of restricting Edar signaling in generation of an appropriately patterned hair follicle array. We propose a receptivity-driven model for hair follicle formation in which regulation of Edar expression is pivotal. Other activation–inhibition systems that have been studied at a molecular level, such as determination of the branched feather structure (32) or vertebrate left–right asymmetry (33), rely on differential diffusion properties of two secreted ligands. In contrast, modulation of a receiving cell’s responsiveness to a widely available signal is central to our model. Such patterning mechanisms rely on a differential range of activating and inhibitory molecules. Edar and β-catenin are restricted to the cell in which they are produced, whereas CTGF and the BMPs are secreted molecules. This property suggests that CTGF action must be spatially restricted. Restriction could be achieved by CTGF immobilization on extracellular matrix components or by diffusion of CTGF–BMP complexes with subsequent release of active BMPs. The observation that mutant and WT embryonic skin have the same levels of Edar expression (Fig. 1 B) is best explained by the fact that, although Eda up-regulates Edar expression, it also induces BMPs, which feed back to inhibit Edar. The culture method we used allows synchronization of follicle formation and makes the skin accessible to experimental manipulations. 
However, it is important that observations from such ex vivo systems are correlated with the findings made in intact animals. In particular, application of recombinant proteins mimics transgenic gain of function approaches, whereas loss-of-function experiments are essential to understanding endogenous functions. In whole animals ablation of Edar signaling specifically blocks primary hair follicle formation (10), whereas suppression of β-catenin activation prevents formation of all follicle types (29). Consistent with an inhibitory role for BMPs in the patterning process, deletion of BMP receptor genes in embryonic epidermis causes an increase in follicle density by the end of the primary wave of follicle formation at E16 (34). In addition, deletion of Noggin, presumably leading to enhanced BMP signaling, reduces hair follicle numbers. However, Noggin mutation specifically ablates secondary follicles while allowing primary follicles to form (14). This mutant phenotype may indicate that Noggin is the chief BMP inhibitor used by secondary follicles to avoid BMP autostimulation, while primary hair follicles instead employ CTGF. CTGF-null mice display skeletal abnormalities and die at birth (35), but their skin phenotype has not been described. In our experiments, manipulation of signaling activities enabled modulation of placode densities over a wide range, from 30 per square millimeter to 140 per square millimeter. This plasticity, as well as the alignment of follicles along the boundary of dissected skin explants, indicates that their ultimate locations have not been defined in Eda mutant skin. However, we did find sporadic Axin2 expression, indicative of patterned β-catenin activity, in the absence of Eda. These foci may be the same cells that were recently identified as initiating, but failing to maintain, very early follicle morphogenesis in Edar mutant skin (36). 
The malleability of follicle position in response to experimental perturbation suggests that this prepattern does not necessarily represent the final hair follicle array. Axin2-expressing foci might be “proposed” follicle locations that become fixed, or not, as Edar function is restricted in the skin. Although the relationship between β-catenin and Edar remains to be fully elucidated, one link between these signaling modules is the finding that BMP inhibits formation of Axin2 foci, whereas Edar suppresses BMP responses. Thus, Edar could enhance β-catenin function indirectly by shielding it from BMP action. The placodes induced by Noggin in Eda mutant skin (Fig. 3D) may represent such a stabilization of Axin2 foci. The up-regulation of Edar observed in early placodes is likely to influence its signaling properties, as illustrated by its overexpression under control of native regulatory elements in the OVE951 line. The ability of moderately elevated receptor levels to compensate for Eda deficiency suggests that receptor up-regulation confers ligand-independent signaling. Thus, the autoactivation of Edar expression that normally occurs in early placodes may be sufficient to allow Eda-independent signal transduction, helping establish commitment to a follicle fate. Our model predicts that ectopic follicles are not produced in this case, despite autonomous Edar signaling, because its expression remains susceptible to BMP inhibition. The K14:LMP1-Edar line was engineered to have ligand-independent signaling but produced a much more dramatic phenotype of ectopic follicle formation. This observation is in accordance with our model because the promoter driving expression is not susceptible to down-regulation by BMPs. The skin of this transgenic line is essentially unpatterned in the sense that follicles simply pack all available space rather than spatially regulating their locations.
The Edar-induced phenotype contrasts with the effects of widespread transgenic activation of β-catenin, which causes growth of new follicles only in adult mouse skin and does not lead to formation of ectopic follicles during the embryonic period (37). This finding suggests that restriction of β-catenin activation is not limiting in defining hair placode locations, although its activity is clearly necessary for follicle formation. In this work, we provide a framework into which other factors involved in hair follicle development can be incorporated as their relationships to the Edar and BMP pathways are uncovered. More broadly, it is likely that variations on this molecular network underlie pattern formation in scale and feather development and generation of tooth morphology. In addition, our findings of epidermal–dermal communication from the earliest stages of patterning indicate a decisive role for both tissues and contradict the simple view that positional information is first generated in the dermis and then conveyed to a passive epidermis.

WT, EdaTa/Ta, and Edaraddcr/cr lines were on the FVB/N background. OVE951 transgenic animals were used to detect Edar by in situ hybridization. Eda is on the X chromosome; for brevity Eda −/− is used in the text to refer to female Eda −/− and male Eda −/Y animals. For timed matings the day on which a vaginal plug was detected was counted as day 0. K14:LMP1-Edar transgenic mice were generated as described (38). Samples were fixed in 4% paraformaldehyde in PBS overnight at 4°C. Hybridization was performed as described (39). In situ hybridizations were photographed, and follicle density was determined by counting the number of Shh-expressing foci in a square of side 1 mm. Data from at least three independent skin explant cultures were used for each follicle density determination. Skin edges were not included in the analysis. Each stripe was counted as a single follicle.
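As a minimal numerical sketch of the density determination just described (the 1-mm² counting square and the use of at least three replicate explants come from the text; the counts themselves are invented):

```python
# Follicle density from counts of Shh-expressing foci.
# Foci are counted in a square of side 1 mm in each of >= 3
# independent explant cultures; the counts below are hypothetical.
counts = [34, 29, 31]          # Shh-positive foci per replicate explant
area_mm2 = 1.0                 # counting square: 1 mm x 1 mm

# Mean follicle density across replicates, in follicles per mm^2.
density = sum(counts) / (len(counts) * area_mm2)
```

The edge-exclusion and stripe-counting conventions described above would be applied when the counts are taken, before this averaging step.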
RNA was reverse-transcribed by using random primers and AMV reverse transcriptase (Roche) in a 20-μl reaction. Reactions were diluted 10-fold, and 5 μl was used as template for each qPCR. TaqMan probes were supplied by Applied Biosystems. The probes used were as follows: βActin (4352341E), BMP2 (Mm01962382_s1), BMP4 (Mm00432087_m1), BMP7 (Mm00432102_m1), CTGF (Mm00515790_g1), Edar (Mm00839685_m1), Keratin14 (Mm00516876_m1), Noggin (Mm00476456_s1), Smad7 (Mm00484741_m1), and Sostdc1 (Mm00840254_m1). Twenty-microliter reactions were performed in triplicate by using an OpticonII thermocycler, with at least three biological replicates used to determine each data point. We did not observe changes in the total amount of βActin expression across the different experimental treatments. For each experiment control and treated samples came from the same litter. Relative or absolute amounts of normalizer and test transcripts were calculated from a standard curve.

Skin Organ Culture and Treatments.

Dorsal skin was dissected, placed onto an MF-Millipore filter on a metal grid, and submerged in DMEM plus 5% FBS in a center-well dish (Falcon) at 37°C and 5% CO2. Epidermal–dermal separations were performed by incubating skin samples at 37°C for 10 min with 2 mg/ml dispase (GIBCO). Tissues were homogenized in TRI reagent (Sigma) to isolate total RNA and proteins. Recombinant EdaA1 (17) was used at 50 ng/ml for in situ hybridizations and histology and at 1,000 ng/ml for analysis of transcriptional targets by qPCR. Recombinant BMPs and EGF were used at 500 ng/ml, and Noggin was used at 1,000 ng/ml. Human BMP2, human BMP4, human BMP7, mouse EGF, and mouse Noggin proteins were from R & D Systems. For experiments involving cotreatment with Noggin the cultures were pretreated with Noggin for 2 h before the addition of Eda. BAY 11-7082 (Calbiochem) was used at 20 μM. Samples were fixed in 4% paraformaldehyde in PBS at 4°C overnight and then dehydrated and embedded in paraffin wax.
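The standard-curve quantification mentioned above can be sketched as follows; all Ct values and template amounts are hypothetical, and the sketch simply assumes the usual linear relation between Ct and log10(template amount):

```python
import math

# Hypothetical standard curve: mean Ct for a 10-fold dilution series
# of known template amounts (arbitrary units).
standards = [(100.0, 18.0), (10.0, 21.3), (1.0, 24.6), (0.1, 27.9)]

# Least-squares fit of Ct = slope * log10(amount) + intercept.
xs = [math.log10(amount) for amount, _ in standards]
ys = [ct for _, ct in standards]
n = len(standards)
x_mean, y_mean = sum(xs) / n, sum(ys) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
         / sum((x - x_mean) ** 2 for x in xs))
intercept = y_mean - slope * x_mean

def amount_from_ct(ct):
    """Read a template amount off the standard curve for a measured Ct."""
    return 10 ** ((ct - intercept) / slope)

# Relative expression of a test transcript (e.g., BMP4) normalized to
# the betaActin normalizer; these Ct values are invented triplicate means.
relative_expression = amount_from_ct(25.1) / amount_from_ct(19.5)
```

An efficient reaction gives a slope near -3.3 Ct per 10-fold dilution, which is what the invented standards above were chosen to produce.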
Six-micrometer sections were stained with hematoxylin and eosin, 1/1,000 FITC-conjugated anti-Keratin14 (Covance), or 1/100 rabbit anti-phospho-Smad1/5/8 (Cell Signaling Technology). Rabbit primary antibody was detected by using 1/200 biotinylated goat anti-rabbit (Upstate Biotechnology) and ABC peroxidase (Vector Laboratories). Protein samples were run on a 12% SDS/polyacrylamide gel and transferred to a nitrocellulose membrane. Blots were blocked in 5% skimmed milk in TBS/0.1% Tween 20 for 1 h and then probed with primary antibody [1/25,000 mouse monoclonal anti-βActin-horseradish peroxidase AC-15 (Sigma), 1/1,000 anti-phospho-Smad1/5/8, and 1/2,000 mouse anti-phospho-IκBα 5A5 (Cell Signaling Technology)] in TBS/0.1% Tween 20 overnight. Signal was detected by using horseradish peroxidase-conjugated secondary antibodies and chemiluminescent substrate.

We thank C. M. Chuong, M. Dixon, D. Garrod, E. Harris, A. Hurlstone, H. Meinhardt, C. Thompson, and R. Widelitz. This work was supported by Wellcome Trust Grant 075220/Z. Author contributions: C.M., P.S., P.A.O., and D.J.H. designed research; C.M., B.J., and D.J.H. performed research; P.S. contributed new reagents/analytic tools; C.M. and D.J.H. analyzed data; and C.M. and D.J.H. wrote the paper.

(2002) Proc. Natl. Acad. Sci. USA 99:8116–8120.
(1976) Morphogenesis of Skin (Cambridge Univ. Press, Cambridge, U.K.).
(1990) Int. J. Dev. Biol. 34:33–50.
(2002) J. Invest. Dermatol. 118:216–225.
(2002) Development (Cambridge, U.K.) 129:2541–2553.
(2002) J. Invest. Dermatol. 118:3–10.
(2004) Development (Cambridge, U.K.) 131:4907–4919.
(1989) Development (Cambridge, U.K.) 107(Suppl.):169–180.
(1998) Development (Cambridge, U.K.) 125:3775–3787.
(2002) Nat. Cell Biol. 4:599–604.
(1995) Biochem. Biophys. Res. Commun. 210:670–677.
(2002) Mol. Cell. Biol. 22:1172–1183.
(2005) Proc. Natl. Acad. Sci. USA 102:11734–11739.
(2004) Development (Cambridge, U.K.) 131:2257–2268.
(2003) Development (Cambridge, U.K.) 130:2779–2791.
(2006) Development (Cambridge, U.K.) 133:1045–1057.
(1994) Development (Cambridge, U.K.) 120:2369–2383.
American Mineralogist (Am Min) offers many themed special article collections, assembled virtually. Articles that fall within a special collection are published exactly like any other paper, in a timely manner and independently of the other articles in the collection, and are then grouped together by issue. Special collection papers are identified by headings at the top of the article and on the Table of Contents. Click here for more information about organizing or writing for a special collection. Submit your paper as any regular paper, but choose the correct Special Section from the menu -- and follow all the normal instructions for authors. Author and journal information are available at the Am Min website, and papers are submitted via the web at our online submission site. When the first paper in a collection is published, a link to view the collection will appear here. If you wish to have a hard copy of any particular collection, visit MinPubs.org. Hint: hover your mouse over the collection name to view a brief description. This collection on mineralogy and the nuclear industry encompasses Cold War legacy issues, specifically the transport of actinides in the subsurface and waste forms for actinides. New research concerning ore deposit genesis, both Th and U, is welcome. The editors welcome a broad scope! The special collection Associate Editors for these papers are Peter C. Burns and Julien Mercadier. Contact them for information about submitting new papers. We announce the solicitation of papers on the broad spectrum of mineralogical, petrological, and geochemical aspects of ultrahigh-pressure (UHP) metamorphism in crustal rocks. Topics of interest include aspects of UHP mineral nano- and microstructure, crystallography, and fluid and melt inclusions; petrology and geochemistry related to UHP topics; and geochronological studies. Papers that present theoretical, analytical, or conceptual advances toward the understanding of UHP metamorphism are particularly encouraged.
The special collection Associate Editors for these papers are Jane A. Gilotti, Daniela Rubatto, and Hans-Peter Schertl. Contact them for information about submitting new papers. The special collection Associate Editors for these papers are Grant S. Henderson and Daniel R. Neuville. Contact them for information about submitting new papers. Residing at the intersection of the biological, geological, and material science realms, the topic of apatite is highly diverse and interdisciplinary. Apatite group minerals are the dominant phosphates in the geosphere and biosphere. They are found in virtually all rock types as the principal sink for P and F and, in many cases, the (Y+REE). They form the major mineral component in vertebrate bones and are the base of the global phosphorus cycle. U-Th-Pb isotopic chemistry in apatite has led to their broad application in geochronology. Lastly, the physical and chemical properties of apatite group minerals make them ideal for many technological applications, including phosphors, lasers, prosthetics, ceramics, metal sequestration agents, and potential solid nuclear waste forms. The special collection Associate Editors for these papers are Dan Harlov and John Hughes. Contact them for information about submitting new papers. The growth of crystals in rocks often leads to imperfections in the crystal in the form of fluid, melt, or mineral inclusions. Geological fluids rising from the mantle to the crust acquire, transport, degas, and deposit different elements in igneous, metamorphic, and sedimentary rocks. Numerous studies over the past half-century have described fluid and melt inclusions as the best repositories for investigating changes in inclusion properties and tracking the evolution of these fluids through time. Recently there has been a growing application of mineral inclusions in rigid hosts to constrain the pressures and temperatures of porphyroblast growth.
This special section aims to bring together researchers who focus their studies on the application of fluid, mineral, and melt inclusions to understand the nature and timescale of geological processes in different geodynamic environments. Multidisciplinary approaches that combine natural observations, structural and/or deformation paths, laboratory experiments, and theoretical and thermodynamic models are particularly encouraged. The special collection Associate Editor for these papers is Kyle T. Ashley. Contact Dr. Ashley for information about submitting new papers. Mineralogists are increasingly involved in research into biomaterials. By definition, biomaterials are nonviable materials used in medicine, in particular for medical devices such as endoprosthetic hip or small-joint implants or dental roots, intended to interact positively with biological systems such as the human body. This special section acknowledges that mineralogists and materials scientists have much to contribute to this growing field. Mineralogical aspects of biomaterials are of wide interest among mineralogists working in the area of biogenic minerals such as calcite and aragonite. However, they should also interest materials scientists and ceramists involved in the science and technology of bioceramics, both structural materials such as alumina, zirconia, and titania, and functional, in particular osseoconductive, materials such as calcium phosphates, especially apatites. The special collection Associate Editor for these papers is Robert B. Heimann. Contact him for information about submitting new papers. Building Planets: The dynamics and geochemistry of core formation aims to combine cutting-edge experimental and modeling results with review articles defining the state of the science and the current challenges to our understanding of the origin, geophysics, and geochemistry of planetary cores.
Our goal is to highlight novel and interdisciplinary approaches that address aspects of core formation and evolution at the atomic, grain, and planetary scales. The Associate Editors for this special section are Tracy Rushmer and Heather Watson. Contact them for information about submitting new papers. Our Special Section in American Mineralogist, titled Chemistry and Mineralogy of Earth's Mantle, will ultimately provide a collection of papers relating to the chemical composition and properties of both the upper and lower mantle. Submissions may be experimental or theoretical in nature, and topics may include concentrations of major and minor elements, bulk mineralogy, phase partitioning, diffusion, and the influence of minor elements on properties such as density, bulk modulus, shear modulus, seismic velocities, anisotropy, and thermal/electrical properties. We are especially interested in papers relating to the incorporation of minor elements and volatiles into mantle phases, their behavior, and their impact on rheological properties. The special collection Associate Editors for these papers are Daniel Hummer and Katherine Crispin. Contact them for information about submitting new papers. The aim of this special section is to shed new light on the complexity of magma chamber dynamics, with a focus on the role of magma mixing and the meaning of measured timescales of magmatic processes. We welcome contributions on the following topics: i) experimental, analogue, geochemical, and/or numerical modeling of magma mixing; ii) micro-analytical investigations of physical and chemical disequilibrium in minerals and between minerals and melt; iii) diffusion modeling in melts/minerals and timescale estimation using both elemental diffusion modeling and radiogenic isotopes; iv) analytical, experimental, and computational approaches leading to new insights into the timescales of magmatic processes, magma ascent, and eruption.
Contributions combining two or more of these approaches are encouraged. The special collection Associate Editors for these papers are Chiara Maria Petrone and Maurizio Petrelli. Contact them for information about submitting new papers. Over the last decade, considerable progress has been made toward the reconstruction of martian mineralogy, geochemistry, geomorphology, and geology. In situ exploration by rovers, combined with remote sensing and analogue studies, has enabled significant advancement in our understanding of martian chemistry and mineralogy. Terrestrial case studies play an important role in observing geological processes that may have taken place on Mars. This session includes analysis of sites that are consistent with current and former martian environments, as well as sites that may replicate specific chemical, mineralogical, or physical geologic processes thought to have taken place on Mars. The present session aims to become a roundtable for Earth and planetary scientists studying terrestrial analogs as case studies of martian geology at all scales. We especially welcome contributions from multidisciplinary approaches, combined field and Mars data analysis studies, and investigations using novel techniques. The special collection Associate Editors for these papers are Janice Bishop, Javier Cuadros, Christian Mavris and Pablo Sobron. Contact them for information about submitting new papers. How do phase transitions and chemical reactions govern the transformation and movement of carbon in Earth? The special collection “Earth in five reactions - A deep carbon perspective” features review articles that use reactions as threads to weave disparate findings into coherent pictures and offer new insights into the role of carbon in Earth's dynamics and evolution. These integrative studies aim to identify gaps in our current understanding and establish new frontiers to motivate and guide future research in deep carbon.
Also included are new experimental and theoretical investigations of reactions involving carbon in different host phases, variable valence states, under a wide range of pressure and temperature conditions, and over a vast span of spatial and temporal scales, with the goal of elucidating the mechanisms and kinetics of key reactions that influence Earth's deep carbon cycle. The special collection Associate Editors for these papers are Jie "Jacky" Li and Simon Redfern. Contact them for information about submitting new papers. Fluids in the Crust is dedicated to high-temperature fluids and fluid-rock interactions. The collection includes analytical, experimental and modeling approaches. The fields of hydrothermal aqueous geochemistry, economic geology, metamorphic geology, igneous petrology and experimental petrology may all fall under the purview of this collection. Studies of rocks and fluids from diverse geologic settings are welcomed, including mid-ocean ridges, subducted slabs, orogenic belts, fumaroles, geothermal fields, and hydrothermal ore deposits. The special collection Associate Editors for these papers are Dionysis Foustoukos and Sarah Penniston-Dorland. Contact them for information about submitting new papers. This collection brings together expertise from the economic geology and igneous petrology communities to track the processes that concentrate volatiles and ore metals from the depths of magmatic systems up through the magmatic-hydrothermal transition and into the ore zone. Interdisciplinary investigation of this complex realm is vital to understand ore deposition and guide exploration. 
We aim to include any and all aspects of magmatic and magmatic-hydrothermal ore deposition, including but not limited to: (1) processes of magma development, (2) source and partitioning of ore metals and volatiles during magma evolution, (3) mechanisms of ore deposition from magmas and/or exsolved magmatic fluids, and (4) optimal conditions for deposition of high-grade deposits versus barren systems (e.g., tectonic setting, lithospheric history, influence of crustal processes, magma/fluid flux, pressure, temperature, oxygen fugacity, sulfur fugacity, pH, salinity, etc.). The special collection Associate Editors for these papers are Celestine Mercer and Julie Roberge. Contact them for information about submitting new papers. This special collection aims to showcase a broad range of research on the geology of the region surrounding Lassen volcano in the southern Cascades, in celebration of the 100th anniversary of the 1916 founding of Lassen Volcanic National Park. Submissions from a wide range of fields including volcanology, petrology, geochemistry, mineralogy, geochronology, geophysics, geobiology, and other related fields are encouraged. The special collection Associate Editors are Lindsay McHenry and Michael Clynne. Contact them for information about submitting new papers. At the 2013 Goldschmidt conference held in Florence (August 25-30th), members of the panel on Glasses, Melts, and Fluids as Tools for Understanding Volcanic Processes and Hazards are invited to submit their papers to a special American Mineralogist Collection. This Special Collection aims to bring together studies on natural systems, experimental activities, and thermodynamic modeling, aimed at advancing our understanding of important issues in petrology and chemical volcanology, such as (i) equilibrium vs. 
disequilibrium degassing processes, (ii) the interplay of magma degassing (including fluid infiltration) and crystallization, (iii) the timing of magmatic processes, (iv) the redox response of magmas in pre-eruptive and syn-eruptive processes, and (v) the link between glass chemical inhomogeneities and magma properties. The special collection Associate Editors for these papers are Claudia Cannatelli, Roberto Moretti, Rosario Esposito, and Nicole Metrich. Contact them for information about submitting new papers. The special collection Associate Editor for these papers is Anita Cadoux. Contact her for information about submitting new papers. This is a collection of papers focused on the nature and timing of processes that form granite magmas, the processes that connect these magmas at their source region with high-level granitic intrusions or their volcanic equivalents, the role of the Earth's mantle in the genesis of granite magmas, and the implications for crustal growth and crustal differentiation. The collection is particularly intended to present the new approaches and techniques currently used to advance knowledge of these fundamental topics. The special collection Associate Editor for these papers is Antonio Acosta-Vigil, who will rely on the advice and reviewer expertise of Richard White as needed; Dr. White will join him as an associate editor if the workload rises. Contact Dr. Acosta-Vigil for information about submitting new papers. This collection will be a thematic collection of papers based on a session at AGU 2012. This collection will incorporate exciting new research that combines the most recent advances in deciphering vesicle, crystal, and melt behavior in eruption-forming magmas. The special collection Associate Editors for these papers are Thomas Shea, Jessica Larsen, and Julia Hammer. Contact them for information about submitting new papers.
This collection will be a thematic group of papers based in part on a session at the 2017 annual meeting of GSA: "Celebrating Dr. John W. Valley's Contributions to Isotope Geochemistry and Beyond, from the Hadean to the Holocene". We seek contributions in honor of the career of John Valley, who has advanced applications of isotope geochemistry in igneous, metamorphic, and sedimentary petrology, planetary science, paleoclimatology, gemology, astrobiology, and other disciplines. The special collection Associate Editors for these papers are Jade Star Lackey and Aaron Cavosie. Contact them for information about submitting new papers. Since their nuclei are fragile and easily destroyed in stars, Li, Be, and B are three of the least abundant elements lighter than Fe in the solar system, 5 to 7 orders of magnitude less abundant than C, N, and O. Yet the processes leading to the formation of continental crust, such as the subduction cycle, have led to localized enrichments sufficient to saturate such systems with minerals where Li, Be, and B are essential structural constituents. Study of such minerals can thus enhance our understanding of crust formation and elucidate the fate of subducted crust. Lithium and B each have two naturally occurring isotopes that further add to the usefulness of Li and B minerals as tracers. Lithium and B also occur in two crystallographic coordinations with oxygen, and the contrast between them may allow us to distinguish different pressure-temperature regimes. All three elements have also found wide economic applications, and several of the minerals themselves are valuable as gemstones. For this special section, we envision a very broad scope, and we will consider manuscripts that touch upon any of these facets of light element mineralogy-petrology-geochemistry. The special collection Associate Editor for these papers is Edward S. Grew. Contact Dr. Grew for information about submitting new papers. 
From the formation of large igneous provinces, with their impact on climate and life, to the eruption of modern ocean islands, thermochemical mantle upwellings known as plumes have played a fundamental role in the evolution of our planet. Over billions of years, plumes have moderated the heat and material fluxes through the mantle and created land that has been accreted to continents. As plumes are thought to rise from great depths, plume-fed volcanism offers an outstanding opportunity to study deep-mantle composition and evolution. However, decades after the existence of plumes was first proposed, a comprehensive understanding of these important dynamical features, as well as their detailed illumination by geophysical methods, remains elusive. The goal of this special volume is to bring together contributions from different disciplines in order to evaluate the sources, dynamics, and evolution of mantle plumes, plume-lithosphere interaction, melt generation, and volcanism. We welcome submissions from geochemistry, petrology, mineralogy, geodynamics and seismology, and other fields of geophysics. The special collection Associate Editors for these papers are Esteban Gazel and Maxim Ballmer. Contact them for information about submitting new papers. This collection will include papers dealing with observations made by the MER or MSL rovers, orbital multispectral or hyperspectral data, and results based on new studies of martian meteorites. The current focus of much of this work is on understanding the influence of water in past environments and the rock types and geologic record contained in rocks and minerals at the martian surface. The special collection Associate Editors for these papers are Brad Jolliff, David J. Des Marais, and Bill Farrand. Please contact Brad Jolliff if you would like to contribute a paper.
We announce a special collection of American Mineralogist based on recent and forthcoming conference sessions on kinetically and transport-controlled geochemical processes in the middle and lower crust and mantle. Currently the element fluxes and timescales of atmospheric, sedimentary, and volcanic processes can be measured, but similar knowledge of mid-crustal to mantle processes remains elusive. We solicit papers on the description and quantitative interpretation of spatial patterns in mineral occurrence (especially metasomatic zoning), in mineral texture, and in the chemical and isotopic compositions of minerals that constrain the mechanisms (e.g., diffusion vs. advection), rates, and timescales of the controlling phenomena. Contributions from field and experimental studies, as well as theory and modeling, are welcome. Please use the subject area tag Geochemical Transport when submitting your paper via the online submission site and look for the special collection name in the drop-down list. The Associate Editors for this special section are Thomas Mueller, Ralf Milke, and John Ferry. Contact them for information about submitting new papers. This special collection is devoted to various topics relating to minerals in the human body. It sets out to examine the interaction, formation, and alteration of minerals in the human body. Past publications in this journal have mainly dealt with characterization of potentially asbestiform minerals, but others (e.g., Guthrie 1992; Norton and Gunter 1999; and Pasteris et al. 1999) have examined broader issues. It's our hope to show how mineralogists, petrologists, and geochemists can aid in this area, while including papers from others outside of the geosciences involved with this issue (e.g., medical researchers and those working in the regulatory fields). The special collection Associate Editors for these papers are Mickey Gunter and Gregory Meeker. Contact them for information about submitting new papers.
Microporous materials are a class of compounds with open-framework structures, mainly represented by zeolites, feldspathoids, and materials with heteropolyhedral frameworks. Many of these structures hold a variety of cations and molecules within structural cavities in their open structures. Interest in this class of materials is growing, and there has been an explosion of studies over the last decades on their occurrence, synthesis routes, properties, and applications. Both natural and synthetic varieties exist, and they represent the intersection between mineralogy and material science. The aim of this special collection devoted to microporous materials is to assemble contributions on the crystal chemistry, properties, and uses of natural open-framework compounds and their synthetic counterparts, emphasizing the connections between mineralogy and materials engineering. The special collection Associate Editors for these papers are G. Diego Gatta and Paolo Lotti. Contact them for information about submitting new papers. Nanocrystalline minerals are ubiquitous in natural systems. They are characterized by coherent domain sizes in the nanometer range, high specific surface areas, and, usually, colloidal properties. All these properties make them important environmental sinks for pollutants and contaminants, as well as vectors for the colloidal transport of contaminants in the environment. The high density of broken bonds at their surfaces often allows for exceptional catalytic activity, and their frequently imperfect stoichiometry, which results from low-temperature and/or biogenic crystallization, often leads to mixed-valent structures whose redox potential allows for the degradation of molecules such as organics.
On the other hand, mineral nanoparticles--the 'nano' version of bulk minerals--can form as the result of weathering or dissolution processes, under conditions of limited mineral growth, or even as transient phases during biotic and abiotic mineral formation processes. The advent of advanced characterization techniques for the detection of nanominerals and mineral nanoparticles in natural systems, as well as for their structural study, has extended now well-established nanotechnology approaches to the mineralogical sciences. In this special issue we invite contributions dealing with the study of nanominerals and mineral nanoparticles, including their occurrences in different natural settings, their structural characterization and their reactivity. The special collection editors are Alejandro Fernandez-Martinez, David M. Singer, and Sylvain Grangeon. Contact them for information about submitting new papers. This is a special collection devoted to the dynamic research field of magma genesis at convergent margins. Magmatism at the Earth's subduction zones generates volatile-rich andesitic magmas that intrigue geoscientists by their compositional resemblance to continental crust, their role in the recycling of solid Earth materials, and their volatile-rich, explosive eruptions that can influence climate. This Special Section seeks contributions that address recent advances with respect to recycled slab materials (subducted oceanic crust, serpentinized mantle) and eroded upper-plate crust, primary melt composition (basaltic or silicic or both?), timescales, mass transfer and mechanisms of slab-to-surface transfer (fluids? silicic melts? mélange diapirs?), all of which control the elemental transfer from slab to surface and beyond. Case studies and conceptual approaches from all disciplines are welcome, including field studies, geochemistry, mineralogy, experimental petrology and geophysical approaches ranging from fluid dynamics to seismology.
The Associate Editors for this special section are Susanne M. Straub and Heather Handley. Contact them for information about submitting new papers. Revealing the Origins of Our Solar System and Its Organic Compounds Through Analysis of Meteorites and Related Planetary Materials is a special collection for American Mineralogist that will gather papers from a 2018 AGU Fall Meeting session of the same name. We are exploring the origin and evolution of our Solar System and extraterrestrial organic compounds through the analysis of meteorites and other similar planetary materials (e.g., cosmic dust, asteroids, comets, and analog materials). The Associate Editors for this special section are Bradley De Gregorio and Eric Parker. Contact them for information about submitting new papers. This collection will be a thematic group of papers based on two sessions at the 2013 Annual Meeting of GSA. It will incorporate new research on cutting-edge approaches to the study of crustal magmas and syntheses that place modern research concepts in an historical context. The special collection editors are Calvin Miller and Cal Barnes. Contact them for information about submitting new papers. Most metals in the periodic table occur naturally within sulfides, making them important minor constituents of igneous rocks. There is thus an increasing interest in the behavior of these metals in magmatic and hydrothermal systems of Earth and other terrestrial planets, particularly with respect to their use in unraveling the magmatic history of planetary objects. In this section, we invite submissions on petrological and geochemical investigations into the behavior of sulfides and chalcophile elements in magmatic and hydrothermal systems.
This section will combine experimental and modeling studies with studies of natural samples to address fundamental processes, such as planetary accretion and core-mantle segregation, the addition of siderophile and chalcophile elements during the late veneer stage of accretion, the budget of chalcophiles in the upper mantle, and the behavior and global cycling of chalcophile elements in subduction zones. The special collection Associate Editors for these papers are Kate Kiseeva and Raúl Fonseca. Contact them for information about submitting new papers. Recent geophysical observations have revealed that the Earth’s deep mantle and core are more complex than previously thought. Geodynamical and geochemical studies have extensively explored the dynamic evolution of the Earth’s silicate mantle and metallic core through geological time. Physical and chemical properties of materials provide fundamental information for interpreting these observations and for constructing robust models. In this special collection, we focus on the physics and chemistry of the Earth’s deep mantle and core to better understand the current nature and dynamic processes of the Earth’s deep interior. This collection seeks to attract contributions from both experimental and theoretical/computational mineral physics studies of deep Earth materials. Topics include, but are not limited to, phase relations, thermodynamics, elasticity, EOS, crystal chemistry, transport and rheological properties. We also welcome contributions from seismology, geodynamics, and geochemistry to provide a unified view of the deep Earth that could guide research in mineral physics. The special collection editors are Ryosuke Sinmyo and Zhicheng Jing. Contact them for information about submitting new papers. In recent years, significant advances have been made in deciphering the rates of magmatic processes for a variety of magmatic systems, from small-scale mafic eruptions to felsic supereruptions.
Timescales range from millennia to minutes, depending on the dating method employed and the magmatic system investigated. Large efforts have also been made in experimental determination of the pre-eruptive conditions for a variety of magmas from different volcanic settings, with depths of magma storage ranging from subcrustal to shallow crustal levels. This collection will bring together contributions that elucidate magma ascent rates to their ultimate storage depths (if storage occurs) and to the surface at the onset of eruption, and that provide tight constraints on the magma source. The papers employ a multitude of techniques that can provide insights on problems of magma thermobarometry, the timescales of magma transfer, remobilization, and eruption on Earth. The special collection editors are Georg Zellmer and Renat Almeev. Submissions are in process now -- contact the editors if you would like to contribute. Olivine is the dominant mineral in Earth’s upper mantle, and is a major phenocryst phase in mafic magmas. Thus, olivine-based studies provide a crucial perspective for understanding mantle and magmatic processes, as well as the role of mantle-derived magmas in crustal evolution. In recognition of the importance of olivine to understanding earth processes, a special session of Goldschmidt 2014 focused on this mineral. Papers are sought for this special issue that utilize the olivine perspective to examine mantle and magmatic processes including minor and trace element compositions of olivine, diffusion studies, thermobarometry of olivine-bearing systems, and olivine-bearing melt and/or fluid inclusions. Contributions that carry implications for olivine-bearing systems (e.g. redox equilibria) are also welcome. The special collection editors are Michael Garcia and Bruce Watson. Submissions are in process now -- contact Mike or Bruce if you would like to contribute. 
This is a special collection focused on diverse topics related to the structure, properties, and applications of natural and synthetic spinels and spinelloids. The collection aims to document the revival of interest in spinel materials, with emphasis on non-oxygen-containing and nanosized structures. The hope is to bring together experimental and theoretical research studies from mineralogists, crystallographers, petrologists, chemists, materials scientists, physicists, and other spinel aficionados. The special collection Associate Editors for these papers are Kristina Lilova, Kaimin Shih, Hiroshi Kojitani, and Ferdinando Bosi. Contact them for information about submitting new papers. This issue will focus on a core group of overview papers that summarize the major topics explored at the Second Conference on the Lunar Highlands Crust in 2012. In the spirit of this interdisciplinary meeting, we encourage collaborative efforts, particularly between co-authors from diverse subdisciplines, but would welcome any contributions related to topics presented at the meeting. Papers undergo normal peer review, and as papers are accepted, they are published. Short title: Lunar Highlands Revisited. The special collection Associate Editors are Rachel Klima and Peter Isaacson. Contact them for information about submitting new papers. Characterizing the bulk geochemistry of trace elements and their isotopes in sedimentary rocks has been the favored approach for the last two decades to investigate the chemistry of ancient oceans. However, diagenetic and metamorphic processes may complicate the interpretation of the paleo-proxy records (concentration and isotope), mostly because of their heterogeneous distributions among the different mineral (and organic) phases. Applying in situ analyses at micro- and nanoscales can overcome this challenge.
In this section, we will provide a collection of experimental and theoretical papers that examine in situ trace element geochemistry in sediments and, most importantly, whether our interpretation of the bulk geochemical approach remains valid. The special collection Associate Editor is Daniel Gregory. Contact them for information about submitting new papers. Over the past few decades mineral microstructures from nano- to centimeter-scale have become an indispensable tool in unravelling tectonometamorphic histories, including conditions of deformation and reactions. Analytical and experimental advances along with new numerical capabilities have made this possible. We welcome contributions that apply quantitative microstructural characteristics to the understanding of fundamental processes within the Earth's lithosphere and/or show examples of exciting new methods for characterizing microstructures that (promise to) give advanced insight into how microstructures develop and evolve through time, reflecting their influence on large-scale geodynamic processes. We welcome contributions from diverse fields such as structural geology/petrology/microstructures and volcanology, including field studies and/or laboratory experiments and/or numerical modeling. We hope that this special collection will provide a state-of-the-art collection of original works highlighting the exciting new insights and future perspectives of modern mineral physics across the Earth Sciences. The special collection Associate Editors for these papers are Patrick Cordier and Sandra Piazolo. Contact them for information about submitting new papers. This collection was based on a session at the Fall 2011 GSA meeting. Submissions on this topic to continue the conversation are welcome as part of the scope and mission of American Mineralogist articles. The special collection Associate Editors for these papers are Callum Hetherington and Gregory Dumond.
Volatile elements including hydrogen, carbon, nitrogen, oxygen, sulfur, and the halogen group elements play an important role in the dynamics, structure, and evolution of terrestrial planets. In particular, volatile elements within the interior of a differentiated planetary body can influence a wide range of chemical and physical properties including redox state, conductivity, rheology, viscosity, melting, shock, degassing and the partitioning of other elements. Papers that address the role of volatile elements in planetary interiors, including the cycling of volatile elements within the Earth or the interaction between surface and mantle reservoirs in differentiated bodies, the stability of volatile element-bearing phases at extreme pressure-temperature conditions, the behavior of volatile elements from surface to core, and the influence of volatile elements on planetary-scale processes within the Earth and other terrestrial bodies are particularly encouraged. The special collection Associate Editors are Anne H. Peslier and Elizabeth C. Thompson. Submissions are in process now -- contact the editors if you would like to contribute. This special section is focused on different aspects of water in hydrous and nominally anhydrous minerals (NAMs) in the crust and mantle, with particular attention to recent analytical and experimental developments. Applications of new theoretical, analytical and experimental approaches for characterizing water in minerals, as well as contributions addressing the storage, speciation and quantification of water in minerals at different physico-chemical conditions, are encouraged. The special section also aims to discuss the effect of water on the chemical and physical properties of minerals and their relation to geodynamics.
The special collection Associate Editors are Roland Stalder, Nathalie Bolfan-Casanova, and Istvan Kovacs. Submissions are in process now -- contact the editors if you would like to contribute. Papers from 2012 AGU session: Sulfates, phosphates, and perchlorates have been found on Mars from orbit and/or from surface missions. Identification of these minerals and mineral suites can constrain Martian geochemical environments. This session generated discussion regarding the conditions for the formation of these minerals on Mars, and methods for identifying their geological environments and related fluid chemistry. Abstracts describing discoveries of sulfates, phosphates, or perchlorates from surface missions and orbital spacecraft data, and those covering laboratory analyses and thermodynamic modeling relating to the hydration state of these minerals and their stability on Mars are included. Submissions on this topic to continue the conversation are welcome as part of the scope and mission of American Mineralogist articles. Short title: Martian Rocks and Soil. The special collection Associate Editors were Darby Dyar, Melissa Lane, and Janice Bishop.
The interaction environment of a protein in a cellular network is important in defining the role that the protein plays in the system as a whole, and thus its potential suitability as a drug target. Despite the importance of the network environment, it is frequently neglected during target selection for drug discovery. Here, we present the first systematic, comprehensive computational analysis of topological, community and graphical network parameters of the human interactome and identify discriminatory network patterns that strongly distinguish drug targets from the interactome as a whole. Importantly, we identify striking differences in the network behavior of targets of cancer drugs versus targets from other therapeutic areas and explore how they may relate to successful drug combinations to overcome acquired resistance to cancer drugs. We develop, computationally validate and provide the first public-domain predictive algorithm for identifying druggable neighborhoods based on network parameters. We also make available full predictions for 13,345 proteins to aid target selection for drug discovery. All target predictions are available through canSAR.icr.ac.uk. Underlying data and tools are available at https://cansar.icr.ac.uk/cansar/publications/druggable_network_neighbourhoods/. The need for well-validated targets for drug discovery is more pressing than ever, especially in cancer, in view of resistance to current therapeutics coupled with late-stage drug failures. Target prioritization and selection methodologies have typically not taken the protein interaction environment into account. Here we analyze a large representation of the human interactome comprising almost 90,000 interactions between 13,345 proteins. We assess these interactions using an extensive set of topological, graphical and community parameters, and we identify behaviors that distinguish the protein interaction environments of drug targets from the general interactome.
Moreover, we identify clear distinctions between the network environment of cancer-drug targets and targets from other therapeutic areas. We use these distinguishing properties to build a predictive methodology to prioritize potential drug targets based on network parameters alone and we validate our predictive models using current FDA-approved drug targets. Our models provide an objective, interactome-based target prioritization methodology to complement existing structure-based and ligand-based prioritization methods. We provide our interactome-based predictions alongside other druggability predictors within the public canSAR resource (cansar.icr.ac.uk). Funding: This study was funded by Cancer Research UK core funding to the Cancer Research UK Cancer Therapeutics Unit at the Institute of Cancer Research, London, grant number C309/A11566. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Competing interests: I have read the journal's policy and the authors of this manuscript have the following competing interests: The authors work at The Institute of Cancer Research, London, UK, which has a commercial interest in the discovery and development of anticancer drugs. Identifying novel drug targets and prioritizing proteins for target validation and therapeutic development are essential activities in modern mechanism-driven drug discovery, and are key if we are to benefit from large-scale genomic initiatives . Multiple approaches exist to estimate the ‘druggability’ or chemical tractability of a protein [2–4]. 3D structure-based assessments predict cavities in the protein structure that are capable of binding small molecules [3–5]. Alternative methods include sequence feature-based druggability [4,6] and ligand-based methods that examine the properties of compounds known to be bioactive against a protein [7–9].
While many genes have been identified as disease-causing (see for example reports on cancer [10,11] and cardiovascular disease ), the products of relatively few of these have become targets for approved therapeutics. The challenges facing researchers attempting to target a gene and its product proteins for clinical application lie both in validating their pathogenic role and in their technical ‘doability’. As well as possessing a pocket or interface suitable for drug binding, a potential drug target must exert an appropriate influence on the system, enabling a drug to have a selective and enduring therapeutic effect. Genetic diseases, prominently cancer, are disorders arising from deregulation or disruption of normal cellular wiring and protein communication. It is therefore essential that the network environment of a potential drug target should be incorporated into target selection rationale. Previous studies have highlighted the importance of considering the interactome when predicting protein function [13,14], assessing drug-target interaction data and understanding polypharmacology [9,15], or predicting novel uses for drugs [16–18]. Meanwhile, recent technological advances in systems biology have generated large quantities of experimentally-derived protein interaction data and networks have been applied to understand the relationships between these protein interactions and disease [20–24]. For example, relationships between protein interactions and cancer have been identified by integrating protein interaction networks with functional or gene expression data [25,26]; structural differences in the network between cancer-causing and non-cancer-causing genes have been highlighted [24–26]; and a potential core ‘diseasome’ network has been documented . Tantalizingly, a number of studies have examined the distribution of some focused topological network parameters, such as degree and clustering coefficient, in drug targets versus non-drug targets [17,18,28]. 
Most notably, the number of first neighbors (degree) was identified as a distinguishing feature of the human ‘highly optimized tolerance’ or ‘HOT’ network and was proposed as a measure to consider when selecting drug targets. This proposition was based on the assumption that inhibiting proteins with a high degree will impact widely on a biological system and thus have undesirable effects . While such extrapolations may not always hold true—for example, many cancer-drug targets are major hubs yet their modulation, singularly or in combination, shows clear selectivity for cancer cells (see references [29,30] and discussed below)—these studies have highlighted the potential of network parameters to provide discriminatory patterns for identifying druggable network nodes. However, such studies imply that any patterns that may exist to distinguish drug targets from other proteins are likely to be complex. Indeed, no purely network-based discriminatory models have been described. Instead, reported models include functional or family annotation [4,6,17,18,28,31] to ensure predictive power. These functional and family annotations overshadow any network parameters due to the dominance of certain protein families (e.g. G-protein coupled receptors) or functions (e.g. enzymes) in the training set of known drug targets [7–9,32]. Thus, the true network behaviors that may distinguish the points in a network most suitable for therapeutic intervention remain elusive. Consequently, despite the fundamental role of cellular wiring in drug action and resistance, there is, to our knowledge, no network-based druggability predictor in existence in the public domain. In this article, we present a comprehensive and systematic computational analysis of 321 topological, community-based and graphical network properties of a fully-connected human interactome. 
Furthermore, we show how these properties relate to druggability in its more complete sense: the suitability for intervention with a molecularly targeted therapeutic agent of any type. In particular, we explore the differing network environments of cancer-drug targets and targets from other therapeutic indications and discuss the potential impact that these differing network environments may have on resistance to cancer drugs. We build and benchmark the first publicly available predictive network-based models to identify likely druggable network nodes and node clusters, and apply these models to a set of 13,345 proteins in the human interactome. To our knowledge, these are the first published models that enable a prediction of druggability based on the topological, graphical and community behavior of proteins in the interaction network. Our network method is intended to be used to complement structure-, sequence- and ligand-based druggability prediction methods in order to provide a holistic view of the likely utility of a given protein as a drug target. The results of this analysis have been implemented in the canSAR knowledgebase [10,11,33,34] and each protein’s network signature, as well as its predicted druggability score, is accessible alongside other druggability measures at http://cansar.icr.ac.uk/. Having constructed the high quality interactome, we constructed four distinct, manually curated training sets representing: a) all targets of FDA-approved drugs, which in turn was divided into b) targets of cancer drugs and c) targets of drugs from non-cancer therapeutic areas; and finally d) cancer proteins—the products of cancer associated genes. We then trained suites of predictive models using each of these training sets to predict druggability using only network topological, community and graph features (see Methods and Fig A in S1 Text). 
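The benchmarks that follow are reported as area under the ROC curve (AUC). As background, AUC equals the probability that a randomly chosen positive example (e.g., a known drug target) is scored above a randomly chosen negative one. A minimal pure-Python sketch of this rank-sum identity, using hypothetical scores and labels rather than the paper's data:

```python
# Rank-sum (Mann-Whitney) formulation of ROC AUC: the fraction of
# positive-negative pairs in which the positive is scored higher,
# counting ties as half. Scores and labels below are hypothetical.
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3]   # hypothetical model scores
labels = [1, 1, 0, 1, 0]             # 1 = known drug target
print(auc(scores, labels))           # 5/6 of pairs correctly ranked
```

On this reading, a mean AUC of 83% means that roughly five of every six such target/non-target pairs are ordered correctly by the models.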
Our predictive models achieve a mean area-under-the-curve (AUC) value of 83% (see also Fig E in S1 Text for recall-precision). The significance compared to random prediction is as follows: drug targets (all therapeutic areas), p-value < 2.0 × 10^-16 compared to 0.018 for the randomized network model; cancer-drug targets, p-value < 2.0 × 10^-16 compared to 0.125 for the randomized network model; and non-cancer-drug targets, p-value < 2.0 × 10^-16 compared to 0.331 for the randomized network model. Thus, the models have high predictive power under extensive in silico validation and can, therefore, be used to enrich potential drug targets during target prioritization for drug discovery. The full results are provided in S1 Table and per-protein analyses are supplied within the canSAR resource (https://cansar.icr.ac.uk/) alongside previously described structure-based and ligand-based methodologies [4,12,34]. We found that several network parameters show distinct distributions in drug targets or targets of cancer versus non-cancer drugs (key parameters are shown in Fig C in S1 Text). On average, drug targets have a higher degree (i.e. more first neighbors) than non-drug targets. Whilst the mean degree of drug targets is 26.34, this is primarily due to cancer targets, which have a mean degree of 47.21 (compared to 13.72 for targets from other therapeutic areas (TAs) and 12.65 for the background—see Table C in S1 Text). We also found that targets of cancer therapeutics have more neighbors and tend to be more hub-like than the average cancer-associated proteins (Table C in S1 Text). In fact, considering the interactome as a whole, out of the 50 proteins with the highest number of interactions, only six (SRC, EGFR, ESR1, AR, HDAC1 and FYN) are drug targets, all of which are targeted by cancer drugs.
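As a concrete illustration of the degree statistics just quoted, the sketch below computes node degree (the number of first neighbors) from an undirected edge list and ranks hubs. The five-node graph and its labels are hypothetical, not drawn from the paper's interactome:

```python
# Toy sketch: degree and hub ranking for an undirected interaction
# graph stored as an edge list. Node names are hypothetical.
from collections import defaultdict

edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("D", "E")]

degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Rank nodes by number of first neighbors (descending).
hubs = sorted(degree, key=degree.get, reverse=True)
print(hubs[0])  # "A" is the top hub, with degree 3
```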
This indicates that a large number of first neighbors is not generically associated with being a drug target; rather, the average is skewed by a few highly connected cancer-drug targets. Network articulation points are nodes that are critical for communication within the network, and their removal would disconnect the network into separate graphs or break off peripheral nodes into unconnected singletons. Our analysis shows that 15% of all drug targets, 17% of cancer-drug targets and 14% of non-cancer drug targets are articulation points, as compared to 9% of the background set (Table C in S1 Text). However, this enrichment is more statistically significant for all drug targets (p-value = 0.0003) and targets of cancer drugs (p-value = 0.0026) than for targets of non-cancer drugs (p-value = 0.0192). An articulation point acts as an 'ambassador' between regions of the network, and this property is enriched in cancer-drug targets. Interestingly, most articulation points in the cancer-drug target set are nuclear hormone receptors (NHRs) and receptor tyrosine kinases (RTKs), which are logical gateways for signaling. We also found that cancer targets are more embedded in their local environment than targets from other therapeutic areas (using Burt’s network constraints [13,14,35] and closeness centrality [9,15,36]; Fig C in S1 Text). In summary, our analysis of 28 topological parameters indicates that there are distinguishing patterns of behavior between 1) cancer-drug targets; 2) targets of non-cancer drugs; and 3) the background interactome as a whole. This indicates that topological parameters can be used as useful features in a predictive model for ab initio identification of drug targets for cancer or for non-cancer drugs. A community within a network is defined as a set of nodes that are densely connected within subsets of the full interactome (see Methods) but may not all interact directly with each other [16–18,37].
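The articulation-point analysis above can be sketched with the classic depth-first-search "low-link" algorithm. The toy graph here (two triangles joined only through node C) is hypothetical:

```python
# Sketch: finding articulation points (cut vertices) of an undirected
# graph via DFS low-link values. Node names are hypothetical.
def articulation_points(adj):
    disc, low, aps = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge
                low[u] = min(low[u], disc[v])
            else:                              # tree edge
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # u separates v's subtree from the rest of the graph
                if parent is not None and low[v] >= disc[u]:
                    aps.add(u)
        if parent is None and children > 1:    # root with >1 DFS child
            aps.add(u)

    for node in adj:
        if node not in disc:
            dfs(node, None)
    return aps

# Two triangles joined only through "C": removing C disconnects them.
adj = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D", "E"],
       "D": ["C", "E"], "E": ["C", "D"]}
print(articulation_points(adj))  # {'C'}
```

Removing C splits the graph into two components, which is exactly the 'ambassador' role the text attributes to articulation points.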
Proteins in a community may be linked together via a function, such as belonging to a particular cellular process. Previous studies have shown the relationship between biological function and network communities [19,37,38]. Drug targets from non-cancer therapeutic areas tend to be members of smaller communities compared to other proteins (Fig C in S1 Text). Interestingly, cancer-associated proteins participate in significantly larger communities, indicating the far-reaching effects of biological malfunctions in this class. Furthermore, cancer-drug targets differ from non-cancer-drug targets when considering their community pattern of interactions. To assess the type of community interactions that a protein is involved in, we developed a vertex modularity score based on the proportion of interacting neighbors that are in the same community (see Methods). We find that non-cancer-drug targets tend to interact intra-community, whereas cancer-drug targets interact both intra- and inter-community. This indicates that while targets of non-cancer drugs address specific functions and defined processes, cancer-drug targets may have wider-reaching effects on different cellular functions. This pattern holds equally true for targets of classical cytotoxic cancer drugs, such as tubulin, as for the modern class of cancer genome-targeted drugs, such as kinase inhibitors (Fig D in S1 Text). Complex networks can be divided into smaller sub-graphs, or graphlets, of increasing complexity (see Methods and Fig B in S1 Text). We find a striking difference in the behavior of cancer-drug targets as compared with targets of non-cancer drugs (Fig 1). Not only are cancer-drug targets significantly more active in graphlets (on average involved in 368 million target-graphlet activities compared to 121 million activities for non-cancer targets), but they are also more commonly seen in complex graphlets (such as G26, G27, G28 and G29) than can be expected at random (Fig 1).
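The vertex modularity score described above (the proportion of a node's interacting neighbors that fall in its own community) can be sketched as follows; the adjacency list and community labels are hypothetical:

```python
# Sketch of a vertex-modularity-style score: the fraction of a node's
# neighbors assigned to the same community. Graph and labels are
# hypothetical, not the paper's detected communities.
adj = {"A": ["B", "C", "D"], "B": ["A", "C"], "C": ["A", "B"],
       "D": ["A", "E"], "E": ["D"]}
community = {"A": 1, "B": 1, "C": 1, "D": 2, "E": 2}

def vertex_modularity(node):
    neighbors = adj[node]
    same = sum(1 for v in neighbors if community[v] == community[node])
    return same / len(neighbors)

print(vertex_modularity("B"))  # 1.0: purely intra-community
print(vertex_modularity("D"))  # 0.5: bridges two communities
```

A node like D, with half of its neighbors outside its own community, shows the inter-community communication pattern that the text associates with cancer-drug targets.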
Importantly, while targets of cancer therapeutics are enriched in these graphlets when normalized against the interactome background, we find that targets of other therapeutic areas are, in contrast, slightly depleted or show little change from background. On average, cancer-drug targets have more publications per node than non-cancer drug targets (39 versus 11 publications per node) and both sets are better studied than the background interactome (which has an average of 7 publications per node). Approved cancer drugs primarily target a single functional site on a single type of protein (such as the catalytic site of a kinase or a cytochrome P450, or the hormone-binding site of a hormone receptor). Currently, most drugs that target heteromeric complexes fall in non-cancer areas (such as the ligand-gated ion channel blockers used to treat disorders of the central nervous system). The graphlet enrichment pattern that we have described here may be due to cancer targets being members of more transient signaling cascades or transcriptional complexes. Fig 1. Enrichment and depletion of key parameters in drug targets over what can be expected at random from the interactome. A) Graphlets and their constituent isomorphism orbits. The graph shows the graphlets and orbits, ordered by descending size and complexity, most enriched in cancer-drug targets (light blue bars). These same graphlets and orbits are either slightly depleted or not differentiated from random in targets of non-cancer drugs (dark blue). The gray line represents graphlet size and complexity (high-to-low). B) The distribution of detected community sizes and the enrichment or depletion of cancer-drug targets (light blue) versus targets of drugs used to treat other diseases (dark blue). C) Box plots showing the distinction in degree and Google PageRank, as well as in vertex modularity, which distinguishes inter- versus intra-community communication of nodes.
Further parameters are shown in the Supporting Information. Thus far, we have discussed some of the parameters that most obviously differentiate the target sets (cancer targets, non-cancer targets, the background interactome). In order to uncover which network features play the most important role, we examined the feature contributions for each of the models generated (see S1 File). We find that no single feature is sufficient for discrimination between the target sets. For example, for the GBM models, the maximum relative information carried by any one feature ranged between 4.78% for the all-drug-target model and 5.86% for the non-cancer target model. Similarly, the maximal mean standard error (%MSE) effect of any one feature for the random forest models was 0.02% across all models. Interestingly, although the top features reported by the different models vary, community and graphlet-based features dominate the list of the top 20 highest-ranked features produced by all the models, while topological features rank lower. Of the 343 drug targets in our interactome, 310 (90%) ranked in the top 25% for druggability according to the overall drug-target model. This is a 3.6-fold enrichment of drug targets compared with what might be expected if proteins were ranked at random. In addition to the 343 targets of already FDA-approved drugs, a further 3,026 currently undrugged proteins fall within this top-quartile, most-druggable set (see S1 Table), highlighting them as potentially suitable for drug discovery. The lowest-ranking approved drug target was CYP17A1, the molecular target of abiraterone, with a rank of 49%. Additionally, we examined how our network-based assessment of druggability performed in relation to several targets of drugs that are under clinical investigation. We examined targets from different protein families and molecular classes.
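The fold-enrichment figures quoted above follow from comparing an observed fraction against the random expectation. A minimal sketch (the helper name and the rounding are illustrative, not from the paper):

```python
def fold_enrichment(hits_in_selection, selection_size, total_hits, total_size):
    """Fold enrichment of 'hits' in a selection over random expectation:
    (hits_in_selection / selection_size) / (total_hits / total_size)."""
    observed = hits_in_selection / selection_size
    expected = total_hits / total_size
    return observed / expected

# Numbers from the text: 310 of the 343 drug targets fall in the
# top 25% (roughly 3,336 of 13,345 proteins) ranked by the model.
print(round(fold_enrichment(310, 13345 // 4, 343, 13345), 1))  # → 3.6
```

The same arithmetic reproduces the other enrichment figures in the paper when the corresponding counts are substituted.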
We found that TLR7, BCL2, EZH2 and MDM2, for example, scored highly using the druggability models (74% or higher using the all-drug-target model, and 85% or higher using the cancer-drug-target model), despite their families not being represented in the training set. Full results for seven targets of investigational therapeutics are shown in Table F in S1 Text. To examine the persistence of the signal, we investigated the predictive models and target coverage across different datasets. First, we explored the effect of utilizing large-scale yeast two-hybrid (Y2H) data instead of compiling all high-quality binary interaction data from different sources. Although the Y2H technique is unbiased, we found many interactions were missing from the interactome, probably because the full matrix of bait-and-prey proteins has not yet been fully examined. We describe this analysis in detail under the section ‘Defining the interactome’ in S1 Text. In summary, we compiled a large Y2H interactome by collecting Y2H data from 5,537 publications, including 30 publications reporting at least 70 proteins. This resulted in an interactome containing 10,998 proteins and 47,994 interactions, covering 256 of the 345 drug targets in the training set. To complement this, we compiled a more comprehensive interactome containing all Y2H studies and high-quality data from multiple sources (see Methods). This resulted in an interactome containing 13,345 proteins and 89,691 interactions, and covered 343 of the 345 drug targets in the training set. As well as missing 26% of the drug-target training set, the large Y2H interactome was missing many known interactions that should have been detectable using the methodology. For example, despite MTOR and its complex components, such as FKBP1A and DEPTOR, being nodes in the Y2H studies, no interactions between them have been reported so far, even though these interactions have been experimentally validated outside of Y2H studies.
We found that the predictive power of the models is stronger when network properties were calculated using the full interactome rather than the Y2H interactome (see Fig G in S1 Text). We detail all models and prediction results in the Supporting Information. Additionally, we defined non-redundant versions of the training sets based on protein sequence similarity, drug chemical similarity and therapeutic-class similarity (see Methods). Again, we found that models built using these training sets underperformed in comparison with models built on the full training set (Fig H in S1 Text and section ‘Further Information’). To identify potential novel drug targets, we collated the top 20 proteins that are not targets of approved pharmaceuticals and are predicted to be druggable using each of our three models (removing any duplicates). This resulted in a set of 49 proteins, shown in S2 Table with their rank from each model and any known link to a disease (using the Online Mendelian Inheritance in Man, OMIM, and the Cancer Gene Census). The distinctions in the prediction ranks illustrate the significant differences between drug targets for cancer and drug targets for non-cancer diseases. The 49 proteins fall into 28 protein families. Despite not including any functional or family annotation in the training descriptors, and focusing only on network parameters, we find enrichment in a number of families and classes. The list of 49 proteins contains 18 enzymes, of which six are phosphatases and three are protein kinases. It also contains five G-protein coupled receptors (GPCRs) and five ephrin ligands of receptor tyrosine kinases. Interestingly, as well as identifying targets that are druggable, the network-based method additionally identified ligands of drugged or druggable proteins. In summary, the results obtained using our predictive network-based models reflect the enrichment of these druggable target families that is seen in targets of approved pharmaceuticals [32,42].
Additionally, among the 49 top proteins are 18 cell surface proteins and several secreted growth factors (S2 Table). It is interesting that these protein classes are identified as druggable although they are not significantly represented in the training set. However, as the training sets include all targets of FDA-approved drugs, be they targets of small molecules or biotherapeutics, it is reassuring that these cell surface targets are scoring highly as they can potentially be drugged by biotherapeutics such as monoclonal antibodies. Furthermore, at least two of these cell surface or secreted proteins are known ligands of existing drug targets (S2 Table). Similarly, several of the top-scoring proteins are adaptor proteins, three of which are known to interact with existing drug targets. Overall, 23 of the 49 proteins have direct interactions with targets of FDA-approved drugs. Thus the methods seem to identify druggable neighborhoods in the interactome as well as individual druggable nodes. Several of the top-scoring proteins of the whole interactome (S1 Table) are similarly ligands or direct interactors of drug targets, indicating that the predictive models are identifying druggable connections or network neighborhoods and not just individual drug targets. To compare our network druggability assessment with other methods of scoring druggability, we used the protein annotation tool in canSAR [17,34,42] to obtain 3D structure-based and ligand-based druggability information for our top 49 proteins (S2 Table). Approximately half of them (24 proteins) can be linked to disease using OMIM or the Cancer Gene Census. We found that 31 of the 49 proteins have 3D structures available and, of these, 16 (52%) have at least two independent structures that are predicted to possess druggable cavities [17,34,42,43]. 
In comparison with the coverage of the proteome, for which an estimated 25% is predicted to be druggable by the same criteria [29,30,34,44], this is a 2-fold enrichment in druggability and shows a degree of concordance between the two independent network- and structure-based druggability predictions, without the bias of functional or family annotations. The overlap may increase in the future with improved coverage of 3D structures for the proteome. Twenty-four of our top 49 proteins are bound by bioactive small molecules (S2 Table) at sub-micromolar concentrations, according to the medicinal chemistry literature [34,43]. Using the ligand-based chemical druggability score that ranks targets based on the drug-like properties of bioactive compounds [34,42], we find that 28 of the 49 proteins rank in the top 25% most druggable proteins in the proteome, a 2.3-fold enrichment over what would be expected at random. Again, this highlights that the output from our network-based methodology overlaps with, and complements, other independent measures of druggability, despite using completely different training sets and parameters. Note that many targets cannot be assessed for ligand-based druggability, or have low scores, because of a lack of available chemical compound bioactivity data; thus the overlap may well increase with time as more targets are chemically explored. An annotated, community-correlated map of the human interactome as described in this study is shown in Fig 2A. Although at first glance the targets of FDA-approved drugs (blue and pink) appear widely distributed, detailed inspection shows that they are concentrated in certain areas, often clustered together, whereas non-cancer drug targets are more widely distributed. There are 148 communities of size greater than 4 in this network, yet 70% of all drug targets are in the top 10 communities (Fig 2B).
Furthermore, one community shows a 23-fold enrichment in the number of cancer-drug targets that it contains over what would be expected at random (Fig 2C). This probably reflects historical biases where focus was on a few easier-to-drug families or on specific, well-studied disease pathways. However, there are druggable opportunities across most regions of the interactome, as shown by the proteins that are predicted to be druggable using protein 3D structural parameters and the network parameters described in this work. Comparing the output from the three orthogonal predictors of druggability (network-based, as presented in this method; 3D structure-based; and chemical/ligand-based) shows significant overlap, despite the predictors basing their predictions on completely independent properties (Fig I in S1 Text). Fig 2. Cancer-drug targets are enriched for highly connected graphlets. A) Interaction network highlighting the distribution of targets of approved cancer drugs (pink); targets of approved drugs from non-cancer therapeutic areas (blue); and targets predicted to be druggable by different druggability prediction methodologies (light and dark green). Druggable proteins are spread widely across the network, while targets of current approved drugs tend to cluster into few areas. B) Cumulative fraction of all drug targets covered by communities. As indicated, a small number of communities cover the majority of drug targets. C) The network communities most enriched in drug targets are listed against the fold enrichment of the number of targets found in them (compared to what can be expected at random). This global view highlights large numbers of potentially missed opportunities and novel target spaces that can be explored, provided that these potential targets are validated for disease causation.
Chemical exploration of these barren areas of the interactome that are predicted to be druggable by both structure- and network-based methodologies may well yield novel targets for future drug discovery. There is a striking difference in the behavior of cancer-drug versus non-cancer-drug targets in the key network parameters described above, such as community behaviors and graphlet structures. This poses an important question: do these apparently inherent properties of cancer-drug targets make it easier for the cell to adapt signaling cascades and remodel the network in response to target inhibition? Furthermore, does this contribute to the emergence of drug resistance? Acquired drug resistance through remodeling of signaling pathways is frequently encountered in cancer therapy [30,45], and one possible way of overcoming such resistance may be through the use of combinations of drugs that target proteins occupying different network environments. We compared the network parameter profiles of the targets of well-studied drug combinations (detailed in section ‘Further information’ in S1 Text) using the limited available data. Our analysis suggests that resistance to drug combinations is more likely to occur if they act on targets with similar network profiles that are in close proximity in a subnetwork (such as BRAF and MEK [46,47]) than for drug combinations that act on targets with different network environments [46,48,49] (see Fig 3). Fig 3. Network profiles and interactions between targets of drug combinations. A) Radar plots showing representative network property profiles of targets of drug combinations. MEK and BRAF network property profiles are more similar to one another than the network profiles of CDKs and HMGCR. This may be related to the long-term effectiveness of the combinations of drugs targeting these proteins.
B) Interactions between proteins targeted by drug combinations, showing a high level of connectivity between targets such as EGFR, BRAF and MEK. The dotted edge indicates that no direct interaction takes place between HMGCR and the other proteins in the network. When combining drugs acting on targets in close network proximity, the inhibition of three or more of these targets seems to be required to prevent the emergence of resistance. Despite these intriguing observations, there is insufficient experimental data to allow statistical examination of whether combinations targeting different network environments have a longer-lived effect than those targeting proximal and similar network nodes. We have presented a systematic, large-scale comparison of 321 topological, community and graphical network parameters for a fully connected interactome of 13,345 proteins and almost 90,000 interactions, totaling 4.2 million calculated properties. We identified significant differences in the network environments that are occupied by cancer-drug targets, non-cancer-drug targets, and the overall interactome. We found a major difference between the degree of cancer-drug targets, which tend to have a greater number of first neighbors and be more hub-like, and the degree of non-cancer-drug targets, which have fewer first neighbors than the interactome average. We found that cancer-drug targets tend to communicate both within and across network communities, unlike non-cancer-drug targets, which primarily communicate within their communities. Overall, community behavior and subgraph connectivities played the most significant roles in this distinction. Indeed, it takes a complex interplay of topological, graphical and community behaviors to provide discriminatory signatures that can distinguish cancer-drug targets from non-cancer-drug targets and from the interactome as a whole.
These signatures led to the generation of predictive models that identify druggable network nodes and neighborhoods with an average accuracy of 83%. As well as identifying targets of approved drugs, the network druggability prediction models described here identified both potentially druggable targets and their local network neighborhoods, providing an independent and complementary method of assessing the suitability of a target for therapeutic modulation. The methods presented in this study use only network parameters, and the training sets include targets of all approved therapeutics, not just small-molecule drugs. Despite this, the output of our network models showed strong concordance with the output from other orthogonal methods that use 3D structural information or ligand-binding data to predict druggability. To enable the research community to use our methodologies for objective and independent target prioritization, we have provided the results of our network-based predictions alongside structure-based and ligand-based druggability results within the canSAR website (https://cansar.icr.ac.uk). These models are already useful predictive tools, and their predictive power can only improve with the elucidation of the full human interactome and the mapping of disease-specific temporal interactions. Exploration of the network parameters of targets in several examples of resistance to cancer drugs, and of mechanisms for synergistic drug targeting, suggests that combining modulators of distinct network environments within the cell may be a more effective approach to overcoming drug resistance than modulating targets with similar network environments. As more data from systematic, large-scale drug combination screens and clinical practice become available, we will be able to explore the extent to which such predictions of effective drug combinations are useful and whether they can provide us with an a priori systems view of selected therapies.
The global view of the interactome presented here provides insights into important, but often neglected, systems-based considerations that should be included when selecting a target for therapeutic investigation, and which have the potential to inform better drug combinations. Data imbalance, redundancy and a lack of clear quality measures are all problems in defining the human interactome [19,31]. The ideal solution would be the availability of a comprehensive and unbiased protein-interaction data collection. Data from yeast two-hybrid (Y2H) studies (e.g. [50,51]) are making headway towards this goal, yet currently cover only a fraction of the human interactome (detailed in ‘Further information’ in S1 Text). Nonetheless, for objective comparison, we created three separate views of the human interactome: Set A) comprising only published Y2H studies from large-scale Y2H publications containing at least 1,000 proteins; this interactome contains 7,722 proteins and 24,406 interactions. Set B) all Y2H data that we could identify in the public domain; this utilized 5,537 publications and includes 10,998 proteins and 47,994 interactions. Set C) the full experimental interactome, including all Y2H publications as well as other high-quality interaction data. For the Set C interactome we collected the human protein-protein interaction data from the partners of the International Molecular Exchange Consortium (IMEx), Phosphosite (http://www.phosphosite.org/), and structurally determined complexes from the Protein Data Bank. We removed ambiguous interactions derived from converting a protein complex into a set of binary interactions. We created a network using the R igraph package. To compensate for the differing numbers of interactions between the proteins, we removed isolated proteins and isolated small subnetworks (Fig A in S1 Text). This resulted in a single network consisting of 13,345 proteins with 89,691 interactions and no unconnected nodes or subnetworks.
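The filtering step described here, dropping isolated proteins and small disconnected subnetworks so that a single connected network remains, amounts to keeping the largest connected component. The paper uses the R igraph package; this stdlib Python sketch with toy protein names is only an illustration of the idea:

```python
from collections import defaultdict, deque

def largest_connected_component(edges):
    """Return the node set of the largest connected component,
    discarding isolated proteins and small disconnected subnetworks."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, best = set(), set()
    for start in adj:
        if start in seen:
            continue
        # breadth-first search to collect one component
        comp, queue = {start}, deque([start])
        seen.add(start)
        while queue:
            node = queue.popleft()
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    comp.add(nb)
                    queue.append(nb)
        if len(comp) > len(best):
            best = comp
    return best

# Toy interactome: one 4-protein component plus an isolated pair.
edges = [("EGFR", "GRB2"), ("GRB2", "SOS1"), ("SOS1", "KRAS"), ("A", "B")]
print(sorted(largest_connected_component(edges)))
```

Applying the same idea to the full compiled edge list would leave the single 13,345-protein network described in the text.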
Despite our stringency in data selection, this network still contains roughly 66% of the proteome and 37% of the total predicted interactome. We have defined a number of target/protein classes. Firstly, the positive ‘drug target list’ is a manually curated list of targets of FDA-approved pharmaceuticals [32,34], defined using strict criteria based on known pharmacological action and drug approval information. Thus it is strictly confined to the curated efficacy targets of the drugs, rather than targets that may bind a drug without therapeutic effect. The ‘drug target’ list includes targets of both small-molecule drugs and biotherapeutics. A total of 343 human drug targets were successfully mapped to the network: of these, 127 are targets of cancer therapeutics, constituting the ‘cancer target list’, while the remainder comprise the ‘drug targets, other therapeutic areas (TA)’ list (Fig A in S1 Text). Finally, we also define a fourth ‘cancer-associated’ protein list, containing proteins that contribute to the pathology of cancer, as a superset of the cancer-drug targets and the protein products of genes from the Cancer Gene Census. Thus 633 of the proteins in the network are labeled as ‘cancer-associated’. For each of the four defined positive sets (e.g. all drug targets), a matching ‘background’ dataset was defined as the remainder of the 13,345 proteins in this study (Table A in S1 Text). To address the bias caused by the correlation between the degree, or number of first neighbors, and other topological descriptors (e.g. hub score), we further classified the datasets into three categories depending on the number of first neighbors: low (≤5), medium (6–30) or high (≥31) degree (Table A in S1 Text). We examined each of the network descriptors analyzed for the full datasets as well as for these degree-dependent subclasses of each dataset.
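The degree-based stratification uses the three cut-offs stated above; a trivial sketch (the function name and example degrees are illustrative):

```python
def degree_class(n_neighbors):
    """Classify a protein by first-neighbor count, following the
    paper's cut-offs: low (<=5), medium (6-30), high (>=31)."""
    if n_neighbors <= 5:
        return "low"
    if n_neighbors <= 30:
        return "medium"
    return "high"

# Hypothetical proteins with their first-neighbor counts.
degrees = {"P1": 3, "P2": 17, "P3": 45}
print({p: degree_class(d) for p, d in degrees.items()})
```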
Additionally, we created non-redundant representatives of the training sets: 1) we clustered targets based on sequence similarity, using a sequence identity cut-off of 50%, a BLAST E-value ≤10−6 and at least 30% sequence overlap, reducing the drug-target training set from 343 to 246 targets; 2) we clustered targets based on the Anatomical Therapeutic Chemical Classification System (ATC) level-3 therapeutic/pharmacological subgroups, reducing the drug-target training set to 82; and 3) we clustered the targets based on shared chemical scaffolds using Bemis and Murcko framework definitions, which reduced the target set to 283. We calculated a total of 321 properties that fell into three categories: topological, graph-based and community-based features (detailed in Table B in S1 Text). We calculated 31 global- and local-network topological parameters using the igraph package and the Disconnectivity Valuation tool DiVa. We also calculated the Dice similarity coefficient based on fractions of shared neighbors, which we converted to a distance matrix before performing multidimensional scaling. We used the two primary dimensions, V1 and V2, as part of our topological descriptor set. For community detection, we applied two types of algorithms, Random Walk and Spin-Glass, as implemented in igraph. The function walktrap.community was applied with a random walk of length = 4, and spinglass.community was applied with a predefined number of communities set to 50. We developed a bespoke measure of protein community communication behavior, the vertex modularity (VM), computed as the number of a protein's neighbors that are in the same community divided by the total number of neighbors the protein has. Therefore, a high VM means that the protein's neighbors are largely in its own community and the protein favors intra-community communication, while a low VM indicates that the protein favors inter-community communication.
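The vertex modularity definition above translates directly into code. A minimal sketch, assuming the community assignment has already been computed (the protein names and communities are hypothetical):

```python
def vertex_modularity(protein, adjacency, community_of):
    """VM = fraction of a protein's neighbors assigned to the same
    community as the protein itself (1.0 = purely intra-community)."""
    neighbors = adjacency[protein]
    if not neighbors:
        return 0.0
    same = sum(1 for nb in neighbors
               if community_of[nb] == community_of[protein])
    return same / len(neighbors)

# Hypothetical example: two of A's three neighbors share A's community.
adjacency = {"A": ["B", "C", "D"], "B": ["A"], "C": ["A"], "D": ["A"]}
community_of = {"A": 1, "B": 1, "C": 1, "D": 2}
print(vertex_modularity("A", adjacency, community_of))  # ≈ 0.667
```

A value near 1 marks the intra-community behavior typical of non-cancer-drug targets in this analysis, while lower values mark the mixed intra/inter-community behavior seen for cancer-drug targets.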
We used GraphCrunch to calculate the subgraphs previously described as a means of fragmenting networks into smaller graphlets (Fig B in S1 Text). The nodes within these graphlets can be classified into ‘isomorphism orbits’ (here referred to simply as ‘orbits’) that reflect the pattern of interactions within the graphlet. We created four training datasets comprising the positive and background sets (Fig A in S1 Text): drug targets (all TAs), cancer-drug targets, non-cancer-drug targets, and cancer-disease-associated proteins. We further split each of these sets into four degree-based subsets as described earlier (all, high degree, medium degree and low degree). This resulted in 16 datasets for modeling (Table A in S1 Text). It is important to note that some of the subsets are very small, such as the ‘Low’ cancer-drug target set, which contains only 16 proteins, and the ‘High’ non-cancer-drug target set, which contains 23 proteins. These sets are too small for effective model building. We inputted the 321 descriptors, calculated for each of the 16 sets, into three distinct predictive modeling algorithms: Random Forests, Gradient Boosted Machines (GBM) and Generalized Linear Models (GLM). Since we can only label proteins as drug targets or as background (unlabeled) proteins (it is not possible to assign a negative training set, because we cannot say which currently undrugged proteins may become successful drug targets in future), we apply a positive-unlabeled (PU) learning paradigm (see e.g.). Using the data derived above, we constructed several models to predict: 1) general druggability (the likelihood of a protein being a drug target for any therapeutic area); 2) cancer druggability (the likelihood of a protein being a cancer-drug target); 3) non-cancer druggability (the likelihood of a protein being a drug target for a non-cancer therapeutic area); and 4) cancer association (the likelihood of a protein being a cancer-associated protein).
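As a concrete example of orbit counting, the smallest graphlet beyond an edge or a two-edge path is the triangle (G2 in the usual graphlet numbering; its single node orbit is orbit 3). Tools such as GraphCrunch enumerate all 2-to-5-node graphlets; this stdlib sketch counts only per-node triangle participation and uses a made-up adjacency:

```python
from itertools import combinations

def triangle_counts(adjacency):
    """Per-node participation in the triangle graphlet: for each node,
    count unordered neighbor pairs that are themselves connected."""
    counts = {node: 0 for node in adjacency}
    for node, neighbors in adjacency.items():
        for a, b in combinations(sorted(neighbors), 2):
            if b in adjacency.get(a, set()):
                # each triangle through `node` corresponds to exactly
                # one such neighbor pair, so no double-counting here
                counts[node] += 1
    return counts

adjacency = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
print(triangle_counts(adjacency))  # A, B, C each sit in one triangle; D in none
```

The full descriptor set generalizes this counting to every orbit of every graphlet up to five nodes.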
Table 1 reports the results of a 10-fold cross-validation for the All, Low and Medium datasets, and of a 5-fold cross-validation for the High dataset due to its small minority class. In total, we built 450 models. Table 1. Results of the 10-fold cross-validation of predictive models. Predictive modeling of network data poses an interesting problem when it comes to training the model. The standard is to report the results of a k-fold cross-validation (CV). For example, in 10-fold CV the data are split into training and validation sets, and the model is built using the 90% training subset and validated on the 10% subset. This process is repeated 10 times, and the average accuracy of the validation is reported as the prediction accuracy. This method is widely adopted as it approximates how the model will perform on new, unseen data. However, with a network, each instance is dependent on other instances, as the descriptors are based on the instance's position in the network. Consequently, using a hold-out set is nonsensical, as there can be no new cases without regenerating the network data. To overcome this problem with the 10-fold CV, we created random training sets that maintained the structure of the network and the number of positives, but where the positive labels were allocated to random proteins. We carried out a 10-fold CV on these random sets to compare with the predictive results observed from the true training sets. Another problem for the predictive modeling of the network was the imbalance of the data. The minority classes ranged from 1% to 5%, and therefore regression models were built rather than two-class classification models. As our data comprise only PU data sets, we report the results based on a ranked evaluation of area under the curve (AUC). We ranked the predictions according to their average regression output and calculated the percentile; for example, a score of 78% means that 78% of proteins had a lower rank than this protein. S1 Text.
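The percentile scoring described here can be sketched as follows (naive about tied scores; the protein names and regression outputs are made up):

```python
def percentile_ranks(scores):
    """Convert regression scores to percentile ranks: a value of 78.0
    means 78% of proteins scored lower, matching the text's example."""
    n = len(scores)
    ranked = {}
    # sort ascending so position i = number of proteins scoring lower
    for i, (protein, _) in enumerate(sorted(scores.items(),
                                            key=lambda kv: kv[1])):
        ranked[protein] = 100.0 * i / n
    return ranked

scores = {"P1": 0.10, "P2": 0.90, "P3": 0.55, "P4": 0.70}
print(percentile_ranks(scores))  # P2 gets 75.0: 3 of 4 proteins score lower
```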
Additional supplementary text and explanation of methodology, together with supplementary data tables and Figs referred to in the main document. S1 Table. The full annotation and network-based druggability predictions of the 13,345 proteins in this analysis (using the largest interactome); the full prediction results for 10,998 proteins using the largest Y2H-based models; and model quality and AUCs for the Y2H models. S2 Table. Details of the top most-druggable proteins identified using the network-based druggability analysis that are not themselves targets of FDA-approved drugs. S1 File. Individual predictive results and relative information content of each of the topological, community and graphical features used to train the models. S2 File. File containing the raw data used to generate the correlation plot in Fig F in S1 Text, showing the limited correlation observed between the network topological, graphical and community-based features used in our analysis. S3 File. P-values of association between individual features and the drug-target classes, namely all drug targets, targets of cancer drugs, or targets of drugs used in other therapeutic areas. We thank the creators of the igraph software, Gábor Csárdi and Tamás Nepusz, for helpful advice, and we are grateful to Joe Tym for technical help. We thank Nicky Evans for editorial assistance. We thank The Heather Beckwith Charitable Settlement and The John L Beckwith Charitable Trust for their generous support of our High Performance Computing facility, which enabled this work. Author Paul Workman is a Cancer Research UK Life Fellow. Conceived and designed the experiments: BAL. Performed the experiments: ACS CM. Analyzed the data: ACS BAL CM. Wrote the paper: ACS PW BAL CM. 4. Al-Lazikani B, Gaulton A, Paolini GV, Lanfear J, Overington JP, et al. (2007) The Holy Grail: Molecular Function—The Molecular Basis of Predicting Druggability. Bioinformatics: From Genomes to Therapies. Wiley-VCH, Vol. 3. 20.
Sharan R, Ulitsky I, Shamir R (2007) Network-based prediction of protein function. Mol Syst Biol 3: 1–13. 23. Memisevic V, Milenkovic T, Przulj N (2010) Complementarity of network and sequence information in homologous proteins. Journal of Integrative Bioinformatics 7. 24. Milenković T, Memišević V, Ganesan AK, Pržulj N (2009) Systems-level cancer gene identification from protein interaction network topology applied to melanogenesis-related functional genomics data. Journal of The Royal Society Interface. 28. Yua QHJ (2012) The Analysis of the Druggable Families Based on Topological Features in the Protein-Protein Interaction Network. Letters in Drug Design & Discovery 9: 426–430. 35. Burt RS (1995) Structural Holes. Harvard University Press. 36. Freeman LC (1977) A set of measures of centrality based on betweenness. Sociometry. 41. McKusick-Nathans Institute of Genetic Medicine (2012) Online Mendelian Inheritance in Man, OMIM. 42. Patel MN, Halling-Brown MD, Tym JE, Workman P, Al-Lazikani B (2012) Objective assessment of cancer genes for drug discovery. Nat Rev Drug Discov 12: 35–50. 53. Csardi G, Nepusz T (2006) The igraph Software Package for Complex Network Research. InterJournal Complex Systems: 1695. 58. Newman MEJ (2004) Fast algorithm for detecting community structure in networks. Physical Review E 69: 066133. 59. Reichardt J, Bornholdt S (2006) Statistical mechanics of community detection. Physical Review E 74: 016110. 61. Breiman L (2001) Random Forests. Mach Learn 45: 5–32. 62. Ridgeway G (2006) Generalized Boosted Models: A guide to the gbm package. 63. Friedman JH (2001) Greedy function approximation: A gradient boosting machine. Annals of Statistics 29: 1189–1232. 64. Zhang DZD, Lee WSLWS (2008) Learning classifiers without negative examples: A reduction approach. 638–643.
TLRs function as molecular sensors to detect pathogen-derived products and trigger protective responses ranging from secretion of cytokines that increase the resistance of infected cells and chemokines that recruit immune cells, to cell death that limits microbe spreading. Viral dsRNA participates in virus-infected cell apoptosis, but the signaling pathway involved remains unclear. In this study we show that synthetic dsRNA induces apoptosis of human breast cancer cells in a TLR3-dependent manner, which involves the molecular adaptor Toll/IL-1R domain-containing adapter inducing IFN-β and type I IFN autocrine signaling, but occurs independently of the dsRNA-activated kinase. Moreover, detailed molecular analysis of dsRNA-induced cell death established the proapoptotic role of IL-1R-associated kinase-4 and NF-κB downstream of TLR3, as well as the activation of the extrinsic caspases. The direct proapoptotic activity of endogenous human TLR3 expressed by cancerous cells reveals a novel aspect of the multiple-faced TLR biology, which may open new clinical prospects for using TLR3 agonists as cytotoxic agents in selected cancers. The recently identified TLR family consists of a germline-encoded set of molecules thought to be critically involved in the detection of pathogens and the triggering of an immune response against microbial infections (1). Ligation of TLRs by their respective ligands triggers well-characterized signaling cascades that result in activation of downstream effectors, such as NF-κB, p38, JNK, and IFN regulatory factors (IRFs) (2); resistance against pathogens (3); and, occasionally, cell death (4), which is another way of protecting the host against microbe spreading (5).
Such proapoptotic properties have indeed been demonstrated for TLR2 and TLR4, which can induce apoptosis in macrophages through signaling via the molecular adaptor MyD88 and the extrinsic Fas-associated death domain-caspase 8 pathway (4, 6) or via Toll/IL-1R domain-containing adapter inducing IFN-β (TRIF) and the mitochondrial death pathway (7), respectively. Moreover, TRIF by itself exhibits proapoptotic properties (8, 9, 10), thereby strengthening the link between TLR signaling and cell death. Double-stranded RNA, which represents either genomic or life cycle intermediate material of many viruses, activates cells through binding to the dsRNA-dependent protein kinase (PKR), a kinase that initiates a complex molecular antiviral program (11). Recently, dsRNA was also shown to be a ligand for TLR3 that triggers the production of type I IFN (12). Moreover, dsRNA has been reported to induce apoptosis in several cell types, apparently through multiple pathways. For instance, dsRNA transfected in pancreatic β-cells induces PKR- and caspase-dependent apoptosis (13, 14), whereas endothelial cell apoptosis triggered by exogenous dsRNA is mostly dependent on the extrinsic caspase pathway (15). However, no direct evidence has yet been presented regarding the role of TLR3 in dsRNA-induced apoptosis. TLR3 agonists have been used in the past, with variable efficiency, as an adjuvant to treat cancer patients, with the aim of inducing an IFN-mediated anticancer immune response (16, 17). Recent studies in mouse models have highlighted the adjuvant role of dsRNA in tumor vaccination, most notably through the promotion of Ag cross-presentation by dendritic cells and the induction of enhanced primary and memory CD8+ T cell responses (18, 19). However, because TLR3 is also expressed on nonimmune cells, such as keratinocytes (20) or endothelial cells (15), the question of a putative expression and role of this receptor in tumor cells needs to be investigated. 
In this study we examined the effects of synthetic dsRNA on cancer cell survival and dissected the TLR3-dependent signaling pathways that can drive those cells to apoptosis. Human breast tumor cell lines (Cama-1, SW527, BT-483, and MCF-7) were obtained from the American Type Culture Collection and cultured in DMEM/Ham’s F-12 medium containing 4.5 g/L glucose (Invitrogen Life Technologies) supplemented with 2 mM l-glutamine (Invitrogen Life Technologies), 10% FCS (Invitrogen Life Technologies), 160 μg/ml Gentalline (Schering Plough), 2.5 mg/ml sodium bicarbonate (Invitrogen Life Technologies), amino acids (Invitrogen Life Technologies), and 1 mM sodium pyruvate (Sigma-Aldrich); this is referred to below as complete medium. Polyinosinic-polycytidylic acid (poly(I:C)) was obtained from InvivoGen. Peptidoglycan and LPS were purchased from Sigma-Aldrich. Type I IFNR-blocking mAb was purchased from PBL Biomedical Laboratories, and TNF-α-neutralizing mAb was obtained from Genzyme. Abs to Stat-1, phosphorylated Stat-1 (Tyr701), and PKR were purchased from Cell Signaling Technology. Abs to human IFN-β were obtained from R&D Systems, and Abs to the NF-κB p65 subunit, TNFR-associated factor 6 (TRAF6), and β-tubulin were purchased from Santa Cruz Biotechnology. The general caspase inhibitor z-Val-Ala-Asp(OMe)-fluoromethyl ketone (z-VAD-fmk) was purchased from R&D Systems, and cycloheximide (CHX) was obtained from Sigma-Aldrich. Human primary breast tumor samples were obtained from the Centre Léon Bérard in agreement with the hospital’s bioethical protocols. Single-cell suspensions were obtained after digestion with collagenase A (Sigma-Aldrich) and enrichment in human epithelial Ag (HEA)-positive cells using HEA microbeads (Miltenyi Biotec) according to the manufacturer’s instructions. The final single-cell suspension contained >80% HEA-positive cells and <2% CD45+ hemopoietic contaminants. 
Cell recovery after treatment with TLR ligands was measured by crystal violet staining (Sigma-Aldrich). Cells were plated at 10^4 cells/well in 96-well plates, and after 72-h culture with or without TLR ligand, cells were washed with PBS, fixed in 6% formaldehyde (Sigma-Aldrich) for 20 min, washed twice, and stained with 0.1% crystal violet for 10 min. After washes and incubation in 1% SDS for 1 h, absorbance was read at 605 nm on a Vmax plate reader (Molecular Devices). Annexin V staining was performed with an Annexin V-FITC apoptosis detection kit (BD Pharmingen) according to the manufacturer’s instructions. Subdiploid cells were detected by staining with 3 μg/ml propidium iodide (PI; Molecular Probes) after overnight permeabilization in 70% ethanol. Fluorescence was analyzed by flow cytometry on a FACSCalibur (BD Biosciences) equipped with a doublet-discrimination module using CellQuest Pro software (BD Biosciences). Cama-1 cell proliferation was analyzed with the anti-BrdU FITC-conjugated Ab set (BD Pharmingen) after a 1-h pulse with 10 μg/ml BrdU (Sigma-Aldrich) according to the manufacturer’s instructions. Production of IL-6 by Cama-1 cells was assessed in culture supernatants with the DuoSet ELISA kit (R&D Systems) according to the manufacturer’s instructions. Cama-1 cells were lysed in 1% Nonidet P-40-containing buffer, and 20 μg of total protein was loaded per lane on SDS-polyacrylamide gels (Invitrogen Life Technologies). Western blots (WB) were performed with standard techniques using the Abs described above. Cama-1 cells were plated in six-well plates at 3 × 10^5 cells/well. After overnight adherence, siRNA transfections were performed for 5 h in OptiMEM medium (Invitrogen Life Technologies) containing 3 μg/ml Lipofectamine 2000 (Invitrogen Life Technologies) and 100 nM siRNA. Cells were then washed and cultured for 72 h in complete medium before treatment with poly(I:C) and apoptosis analysis. 
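The crystal violet readout described above is normalized to medium-only control wells, which are defined as 100% recovery; the calculation is simply a ratio of absorbance readings. A minimal sketch of that normalization, using hypothetical OD values rather than data from the paper:

```python
from statistics import mean

def percent_recovery(treated_od, control_od):
    """Normalize crystal violet absorbance readings (OD at 605 nm) of
    treated wells to the mean of medium-only control wells (= 100%)."""
    baseline = mean(control_od)
    return [round(100.0 * od / baseline, 1) for od in treated_od]

# Hypothetical triplicate readings, not values from the paper
control = [0.80, 0.82, 0.78]   # medium alone
treated = [0.20, 0.24, 0.22]   # e.g., 50 ug/ml poly(I:C)
print(percent_recovery(treated, control))  # → [25.0, 30.0, 27.5]
```

Reporting each treated well against the mean of the control triplicate (rather than well-by-well pairing) matches the "cultures in medium alone considered 100%" convention used for Fig. 1a.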
The siRNA duplexes specific for TLR3, PKR, and p65 were purchased from Dharmacon as SMART-Pools. TRIF and control scrambled siRNA were purchased from the same supplier as single oligoduplexes (5′-GCUCUUGUAUCUGAAGCAC-3′ and 5′-ACUAGUUCACGAGUCACCU-3′, respectively). TLR3 and TRIF expression was assessed by PCR (35 cycles of 1 min at 94°C, 1 min at 55°C, and 2 min at 72°C) with Taq PCR ReadyMix (Sigma-Aldrich) using the following primers: 5′-AACGATTCCTTTGCTTGGCTTC-3′ (forward)/5′-GCTTAGATCCAGAATGGTCAAG-3′ (reverse) for TLR3 and 5′-ACTTCCTAGCGCCTTCGACA-3′ (forward)/5′-ATCTTCTACAGAAAGTTGGA-3′ (reverse) for TRIF. Expression of PKR and p65 was assessed by WB as described above. Statistical significance was assessed with the two-tailed Student’s t test, and results are given as the mean ± SD. To investigate the role of TLR3 agonists on tumor cells, human breast adenocarcinoma cell lines were cultured with 50 μg/ml of the dsRNA analog poly(I:C) for 72 h. Three of the four cell lines tested (Cama-1, BT-483, and SW527, but not MCF-7) showed a significant decrease in cell recovery, as measured by crystal violet staining, with Cama-1 consistently exhibiting the most dramatic drop (Fig. 1a). Nevertheless, the poly(I:C)-induced decrease in cell recovery of BT-483 and SW527, although weaker than that in Cama-1 cells, was both significant (35 and 25%, respectively, compared with <3% in controls) and highly reproducible (observed at least three times). The decrease in recovery of Cama-1 cells was due to apoptosis, as illustrated by annexin V staining (Fig. 1b). Poly(I:C) triggered significant dose-dependent apoptosis in the Cama-1 cell line, starting at 9 h and reaching a level of 80% apoptotic cells after 30 h of treatment (Fig. 1c). The decrease in cell recovery was associated with an increase in subdiploid cells, as illustrated by PI staining (Fig. 1d). 
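The subdiploid readout behind the PI staining in Fig. 1d amounts to counting events whose DNA-content fluorescence falls below the G1 (diploid) peak of the histogram. A minimal sketch of that gate, with hypothetical PI intensities and an assumed gate position (neither taken from the paper):

```python
def subdiploid_percent(pi_intensities, g1_gate):
    """Percentage of events below the G1 DNA-content peak in a
    propidium iodide histogram, i.e. the subdiploid (apoptotic) gate."""
    sub = sum(1 for x in pi_intensities if x < g1_gate)
    return 100.0 * sub / len(pi_intensities)

# Hypothetical PI intensities (arbitrary units): sub-G1, G1, and G2/M events
events = [55, 60, 70, 200, 210, 205, 215, 400]
print(subdiploid_percent(events, g1_gate=150))  # → 37.5
```

In practice the gate is placed by eye just below the G1 peak of the untreated control, so the absolute percentage depends on that choice; this sketch only illustrates the counting step.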
When added at 50 μg/ml to Cama-1 cell cultures, the dsRNA analog polyadenylic-polyuridylic acid (poly(A:U)) triggered similar cell death, although with slower kinetics (Fig. 1e). Importantly, one of two human primary breast tumor cell samples exposed for 48 h to 50 μg/ml poly(I:C) underwent a 2-fold increase in apoptosis, as illustrated by PI (Fig. 1f) and annexin V (data not shown) staining. A nonspecific proapoptotic effect of our TLR3 agonist preparations was excluded by the absence of toxicity in MCF-7 breast cancer cells (Fig. 1a) and in four non-small cell lung and colon cancer cell lines, as well as in TLR3-transfected 293 cells (data not shown). Collectively, these results demonstrate that TLR3 agonists are able to trigger the apoptosis of breast tumor cells directly and in a dose-dependent manner. Synthetic dsRNA induces TLR3- and TRIF-dependent apoptosis of human breast tumor cells. a, Breast tumor cell recovery after culture with poly(I:C) is expressed as a percentage, with cultures in medium alone considered 100%. The data shown were obtained from three independent experiments conducted in triplicate. The star indicates a statistical difference from respective controls (p < 0.05). b, Cama-1 cells were cultured for 24 h without (gray) or with (white) poly(I:C), and apoptosis was measured by annexin V staining. c, Cama-1 cells were cultured without (▵) or with increasing doses of poly(I:C) (□, 0.5 ng/ml; ○, 5 ng/ml; ▴, 50 ng/ml; ▪, 500 ng/ml; •, 5 μg/ml), and the percentage of annexin V-positive apoptotic cells was measured at the indicated time points. Data shown are representative of three independent experiments with similar results. d, Cama-1 cells were cultured without (PBS) or with poly(I:C), and DNA content was measured by PI staining. The percentage indicates the proportion of subdiploid cells in cultures. e, Cama-1 cells were cultured for 24 or 48 h without (▦) or with (▪) poly(A:U). 
Annexin V-positive apoptotic cells are expressed as a percentage of the total cells in culture. Data shown were obtained from three independent experiments. f, Freshly recovered breast tumor cells were cultured in medium without (PBS) or with poly(I:C), and cell DNA content was measured by PI staining. Percentages indicate the proportion of cells with low DNA content (subdiploid cells), i.e., apoptotic cells. PKR can be triggered by synthetic transfected dsRNA (21), whereas TLR3 can be triggered by exogenous poly(I:C) (22). To determine whether PKR or TLR3 was involved in dsRNA-induced Cama-1 cell apoptosis, the expression of each protein was efficiently suppressed through transfection of specific siRNAs (Fig. 2a). Interestingly, TLR3 mRNA was not readily detected at steady state in either Cama-1 cells or the other cell lines studied, and the level of TLR3 mRNA, as evaluated by PCR, was not directly linked to the apoptotic response to poly(I:C) in the four lines analyzed (Fig. 2a and data not shown); nevertheless, poly(I:C) treatment induced strong TLR3 mRNA up-regulation in Cama-1 cells (Fig. 2a, left panel). Suppression of TLR3 with specific siRNA virtually abrogated poly(I:C)-induced apoptosis, whereas cell death occurred normally in the almost complete absence of PKR (Fig. 2b). The serine/threonine protein kinase inhibitor 2-aminopurine had no effect on poly(I:C)-induced apoptosis (data not shown), confirming the lack of PKR involvement. Although the involvement of MyD88 in TLR3 signaling remains controversial, TRIF is the critical adaptor protein for TLR3, from which signaling diverges. On the one hand, the recruitment of TRAF6 and receptor-interacting protein 1 by TRIF leads to the activation of NF-κB, JNK, and p38. On the other hand, recruitment and activation of TRAF family member-associated NF-κB activator-binding kinase (TBK1) drive the nuclear translocation of IRF-3 and the production of type I IFN (1, 23, 24). 
Accordingly, suppression of TRIF, but not MyD88, with specific siRNA significantly reduced poly(I:C)-induced apoptosis of Cama-1 cells (Fig. 2b). Double-stranded RNA not only induced apoptosis, but also blocked the proliferation of Cama-1 cells, as measured by BrdU incorporation (Fig. 2c). The siRNA experiments showed that, like apoptosis, the cytostatic effect of poly(I:C) was mediated by TLR3, but was independent of PKR. Of note, inhibition of TRIF or MyD88 expression by itself decreased BrdU incorporation (Fig. 2c), suggesting a role for these molecules in Cama-1 cell proliferation. Taken together, these data demonstrate that synthetic dsRNA both induces the apoptosis and blocks the proliferation of breast cancer Cama-1 cells in a TLR3- and TRIF-dependent manner, which involves neither PKR nor MyD88. Poly(I:C) induces TLR3- and TRIF-dependent, but PKR- and MyD88-independent, apoptosis of human breast tumor cells. a, Cama-1 cells were collected after siRNA transfection and culture without or with poly(I:C). RNA was PCR amplified to assess TLR3, TRIF, and MyD88 expression, whereas PKR protein expression was analyzed by WB on cell lysates. β-Actin RNA and β-tubulin protein were used as loading controls. b, Cama-1 cells transfected with the indicated siRNA or with scrambled control duplex (scr) were cultured for 24 h without (▦) or with (▪) poly(I:C). Annexin V-positive apoptotic cells are expressed as a percentage of the total cells in culture. Data shown were obtained from three independent experiments. The star indicates a statistical difference from respective controls (p = 0.001). c, Cama-1 cells were pulsed for 1 h with BrdU after siRNA transfection and subsequent 24-h culture without (▦) or with (▪) 5 μg/ml poly(I:C), and cellular proliferation was analyzed as described in Materials and Methods. Proliferating cells are expressed as a percentage of the total cells in culture. 
Data shown were obtained from three independent experiments, and the star indicates a statistical difference from controls (p < 0.02). Because the TRIF adapter is known to mediate the type I IFN response of TLR3 (23), the role of type I IFN in TLR3- and TRIF-mediated apoptosis was evaluated. IFN-β production was strongly induced upon poly(I:C) treatment, and Stat1 phosphorylation was observed, indicative of type I IFN signaling (Fig. 3a). Of note, the very sensitive detection of Stat1 phosphorylation was maximal after 6 h of poly(I:C) treatment, when IFN-β production was still hardly detectable by WB. Neutralization of the type I IFNR with specific mAb significantly reduced poly(I:C)-induced apoptosis (Fig. 3b), demonstrating that type I IFNs were necessary for TLR3-mediated cell death. However, treatment of Cama-1 cells with a mixture of IFN-α and IFN-β did not induce apoptosis (Fig. 3b), whereas it sensitized other breast cancer cells to apoptosis, thereby demonstrating its biological activity (B. Salaun and S. J. Lebecque, manuscript in preparation). These results establish that type I IFN signaling is required for TLR3-triggered cytotoxicity, although it is insufficient to induce cell death by itself. Therefore, type I IFN- and additional TLR3-triggered signaling pathways appear to cooperate to trigger Cama-1 cell apoptosis. TLR3-induced apoptosis requires type I IFN. a, IFN-β and Stat-1 protein levels and Stat1 phosphorylation were measured by WB in lysates of Cama-1 cells cultured with poly(I:C) for the indicated time periods. β-Tubulin is shown as a loading control. b, Cama-1 cells untreated or preincubated for 1 h with either neutralizing type I IFNR mAb (anti-IFN R1) or isotype control (mIgG1) were cultured without (▦) or with (▪) poly(I:C) or a mixture of IFN-α and IFN-β (□). Annexin V-positive apoptotic cells are expressed as a percentage of the total cells in culture. Data shown were obtained from two independent experiments. 
The star indicates a statistical difference from respective controls (p < 0.02). Besides type I IFN production, TLR3 has also been shown to trigger TRIF-mediated NF-κB activation (12). IRAKs are central to TLR signaling and are known to induce I-κB degradation through TRAF6 recruitment and subsequent activation of the I-κB kinase complex (2). However, the roles of IRAK-4 and TRAF6 in TLR3 signaling remain unclear. The siRNAs specific for each molecule efficiently suppressed the expression of the respective protein in Cama-1 cells (Fig. 4a). Double-stranded RNA-induced IL-6 secretion, which is mediated by TLR3 (Fig. 4b), was significantly reduced in the absence of either IRAK-4 or TRAF6 expression. Unexpectedly, suppression of IRAK-4, but not of TRAF6, prevented poly(I:C)-triggered, TLR3-mediated apoptosis (Fig. 4c). Incidentally, the very low secretion of IL-6 by Cama-1 cells not exposed to poly(I:C) indicated that siRNA transfection did not significantly activate TLR3. Taken together, these results indicate that both IRAK-4 and TRAF6 participate in endogenous TLR3 signaling in Cama-1 cells and reveal an unsuspected pathway, downstream of TLR3, that involves IRAK-4, but not TRAF6, and leads to cell death. TLR3-induced apoptosis requires IRAK-4, but not TRAF6, whereas both IRAK-4 and TRAF6 are involved in TLR3-mediated IL-6 secretion in Cama-1 cells. a, Cama-1 cells were collected after transfection with siRNA specific to either IRAK-4 or TRAF6, and the expression of the corresponding protein was assessed by WB. β-Tubulin is shown as a loading control. n.s., nonspecific band. b, IL-6 secretion by Cama-1 cells was measured in culture supernatants after siRNA transfection and subsequent 24-h culture without (▦) or with (▪) 5 μg/ml poly(I:C). Data shown were obtained from two independent experiments. The star indicates a statistical difference from respective controls (p < 0.01). 
c, After treatment similar to that in b, apoptosis induction by poly(I:C) was analyzed by annexin V staining and is expressed as a percentage of apoptotic cells in the culture. Data shown were obtained from three independent experiments. The star indicates a statistical difference from respective controls (p < 0.01). An autocrine effect of TNF-α has previously been implicated in the apoptotic activity of TLR4 ligand in human alveolar macrophages (25). This cytokine plays no role in TLR3-mediated apoptosis, because a neutralizing anti-TNF-α Ab, which protects Cama-1 cells from TNF-α-induced apoptosis, has no effect on poly(I:C)-triggered cell death (Fig. 5a). The general protein synthesis inhibitor CHX is known to sensitize cells to TNF-α-induced apoptosis by blocking the NF-κB-controlled survival program (26). As expected, pretreatment with CHX significantly sensitized Cama-1 cells to TNF-α-induced cytotoxicity (Fig. 5a). In contrast, it partially protected the cells against poly(I:C)-triggered apoptosis, confirming that different mechanisms were triggered by these two proapoptotic stimuli. Indeed, inhibition of NF-κB p65 expression by specific siRNA (Fig. 5b) protected Cama-1 cells against poly(I:C)-induced apoptosis (Fig. 5c). Collectively, these results demonstrate that TNF-α secretion is not responsible for poly(I:C)-induced apoptosis and establish a proapoptotic role of NF-κB in TLR3-mediated apoptosis that contrasts with its antiapoptotic function upon TNF treatment. TLR3-triggered Cama-1 apoptosis is independent of TNF-α, but requires protein synthesis and NF-κB. a, Cama-1 cells, untreated or preincubated with either neutralizing anti-TNF-α mAb or CHX, were cultured without (▦) or with poly(I:C) (▪) or TNF-α (□). Annexin V-positive apoptotic cells are expressed as a percentage of the total cells in culture. The data shown were obtained from two independent experiments. The star indicates a statistical difference from respective controls (p < 0.01). 
b, Cama-1 cells were collected after siRNA transfection, and NF-κB p65 protein expression was determined by WB. β-Tubulin is shown as the loading control. c, Cama-1 cells, either mock-transfected or transfected with siRNA specific for p65 or scrambled control duplex (scr), were cultured without (▦) or with (▪) poly(I:C). Results are expressed as the percentage of annexin-positive apoptotic cells in culture. Data shown were obtained from three independent experiments. The star indicates a statistical difference from mock-transfected controls (p < 0.005). The role of caspases in poly(I:C)-induced cell death was analyzed. The broad caspase inhibitor z-VAD-fmk, which inhibited TNF-α-induced cell death, also greatly reduced poly(I:C)-triggered apoptosis, suggesting a major role for caspases in TLR3-mediated cytotoxicity (Fig. 6a). Poly(ADP-ribose) polymerase (PARP) cleavage, a hallmark of caspase-dependent apoptosis, occurred in Cama-1 cells upon poly(I:C) treatment (Fig. 6b, top panel), confirming the involvement of caspases in TLR3-mediated apoptosis. Caspase 3 was indeed activated upon poly(I:C) treatment, as demonstrated by WB analysis (Fig. 6b, middle panel). Interestingly, caspase 8 was also activated by poly(I:C) (Fig. 6b, lower panel), reminiscent of the apoptosis triggered by TRIF overexpression (10), and the caspase 8-specific inhibitor z-IETD prevented this apoptosis (data not shown). TLR3-mediated apoptosis is dependent on extrinsic caspase activation. a, Cama-1 cells were preincubated with the general caspase inhibitor z-VAD-fmk or DMSO (used as control) before culture without (▦) or with (▪) poly(I:C) or TNF-α (□). Results are expressed as a percentage of the annexin-positive apoptotic cells in culture. Data shown were obtained from three independent experiments. The star indicates a statistical difference from respective controls (p < 0.0001). 
b, Lysates from Cama-1 cells, obtained as described in a, were analyzed by WB with mAb specific for poly(ADP-ribose) polymerase (PARP; top panel), caspase 3 (middle panel), and caspase 8 (lower panel). F.L., full length; p85, cleaved PARP; p17 and p19, cleaved caspase 3; p43/41 and p18, cleaved caspase 8. β-Tubulin is shown as a loading control. c, Cama-1 cells cultured without (▦) or with poly(I:C) (□) were incubated with 3,3′-dihexyloxacarbocyanine iodide (DiOC6(3)), and accumulation of the dye, which depends on mitochondrial transmembrane potential, was determined by flow cytometry. d, Bax protein levels were measured by WB in lysates of Cama-1 cells cultured with poly(I:C) for the indicated time periods. β-Tubulin is shown as the loading control. The low levels of activated caspases 3 and 8 still present after z-VAD-fmk pretreatment and poly(I:C) stimulation may be responsible for the residual apoptosis observed by annexin staining, although the involvement of a caspase-independent apoptotic pathway remains a possibility. Caspase 9 activation could not be detected (data not shown), although poly(I:C) triggered a sharp decrease in mitochondrial membrane potential, as measured by DiOC6(3) staining (Fig. 6c), and a clear up-regulation of the proapoptotic Bax protein (Fig. 6d). Taken together, these results demonstrate the dominant role of the extrinsic apoptotic pathway (shared with death receptors such as TNFR, Fas, and TRAIL) in poly(I:C)-triggered apoptosis, although some participation of the intrinsic pathway could not be completely excluded (27). Although involvement of TLR3 in apoptosis has recently been suggested (9, 10, 28), direct demonstration of the participation of this receptor in cancer cell apoptosis has been lacking. The present work demonstrates the role of TLR3 in triggering breast cancer cell apoptosis via the adaptor TRIF, independently of PKR and MyD88. 
In addition to TLR3 and PKR, the RNA helicase retinoic acid-inducible gene I (RIG-I) was recently described to initiate a cellular response to dsRNA (29). However, TLR3 and RIG-I are reported to trigger nonoverlapping signaling pathways. Therefore, given the almost complete protection provided by either TLR3 or TRIF siRNAs in Cama-1 cells, it is unlikely that RIG-I plays an important role in dsRNA-induced apoptosis. Molecular events involved in cell death induced by TLR3 agonists include the production of type I IFN, which is required, but not sufficient, for apoptosis. NF-κB p65 and extrinsic caspases are activated by TLR3 engagement and are also necessary for TLR3-mediated apoptosis. Regarding the signaling pathway, we demonstrate in this study that IRAK-4 and TRAF6 are involved in TLR3-triggered IL-6 production by Cama-1 cells. Although transfection-based studies have excluded IRAK-4 from the TLR3-triggered signaling cascade (30), our data are in agreement with reports demonstrating that the lack of IRAK-4 expression deeply affects the response to dsRNA in both mice (31) and humans (32). Poly(I:C)-induced cell death also reveals a pathway downstream of TLR3 that signals through IRAK-4 even in the absence of TRAF6. Similar to our findings, a branching point downstream of the IRAK kinases has been described for TLR4, where the proapoptotic and NF-κB signaling pathways were shown to diverge after IRAK-1 activation (33). However, several steps along the proapoptotic signaling pathway induced by TLR3 remain to be clarified. Indeed, it is unclear whether the early recognition of dsRNA is mediated by the low level of TLR3 expressed on resting cells or by another receptor. Elucidating the exact contribution of type I IFNR signaling (known to activate the extrinsic caspases (34)) and analyzing the putative roles of proteins such as TBK1, IRF-3, and RIP1, which all participate in TLR3 signaling (35), will also require additional investigation. 
Type I IFN involvement is reminiscent of the toxicity of the combination of dsRNA and type I IFNs for many cell types (36) and of the essential role these cytokines play in PKR-independent, virus-induced apoptotic cell death (37). Regarding the mechanisms of action, the partial inhibition of dsRNA-induced apoptosis by the general protein synthesis inhibitor CHX shows that type I IFNs do not participate in TLR3-triggered cell death simply by down-regulating protein synthesis through PKR-induced phosphorylation of eukaryotic initiation factor 2α. Alternatively, type I IFNs can facilitate apoptosis in various cell types by up-regulating the expression of proteins directly involved in cell death, including caspases (38), TRAIL, and p53 (39). Furthermore, IFN-α induces the expression of multiple genes that increase and accelerate the response to dsRNA, including PKR, 2′5′-oligoadenylate synthetase, IRF-3, and TLR3 (15). Lastly, in contrast with its survival role after TLR2 (40) and TLR4 (7) triggering, NF-κB appears to be required for TLR3-induced apoptosis. It remains to be established whether the p65 subunit of NF-κB is involved in the up-regulation of TLR3 or type I IFN expression or in other pathways that link TLR3 triggering to apoptosis. Finally, not every breast cancer cell line we tested was killed by poly(I:C), and there was no simple correlation between TLR3 expression in the resting state and poly(I:C)-induced apoptosis in the breast cancer cell lines tested in vitro. Defects in the cellular apoptotic machinery may explain the resistance to TLR3 agonists of cells such as MCF-7, which lack functional caspase 3 (41). Alternatively, differences in subcellular localization of the receptor or in the ability to produce and/or respond to type I IFN after TLR3 stimulation may account for the variation in sensitivity observed in vitro. 
Both poly(I:C) and poly(A:U) have been used with moderate success as adjuvant therapy in clinical trials for different types of cancer, including adenocarcinomas of the breast (42). Although the initial goal had been to trigger an innate immune response against cancer cells, the above data suggest that TLR3 agonists might have a direct proapoptotic effect on tumor cells. Indeed, retrospective immunostaining of breast tumor biopsies has shown that only patients with TLR3+ breast cancer had prolonged survival after receiving poly(A:U) vs placebo (43, 44). Those results support a direct effect of TLR3 agonist on cancer cells that is compatible with our in vitro data and that, in contrast to other reports of TLR-triggered apoptosis, does not require simultaneous inhibition of transcription, translation, or proteasomal degradation (9, 28, 33). Importantly, although we could not obtain primary normal breast cells for in vitro study, the lack of breast side effects in patients receiving TLR3 agonist after surgical removal of their tumor (16) is encouraging considering the possible toxicity of such treatment on nontransformed breast epithelial cells. To conclude, the present data open a new range of therapeutic applications for TLR3 agonists as cytotoxic agents in selected cancers and raise the exciting concept of multifunctional adjuvants that are able to both directly kill the tumor and enhance the host’s immune response against it. We gratefully thank Jean-Yves Blay and Christine Caux-Ménétrier for providing fresh tumor samples, Jean-Jacques Pin for invaluable technical help, and Sem Saeland and Blandine de Saint-Vis for critical reading of the manuscript. 1 B.S. and I.C. were supported by a Fondation Marcel Mérieux fellowship. 2 B.S. and I.C. contributed equally. 
4 Abbreviations used in this paper: IRF, IFN regulatory factor; CHX, cycloheximide; HEA, human epithelial Ag; IRAK, IL-1R-associated kinase; PARP, poly(ADP-ribose) polymerase; PI, propidium iodide; PKR, dsRNA-dependent protein kinase; poly(A:U), polyadenylic-polyuridylic acid; poly(I:C), polyriboinosinic-polyribocytidylic acid; RIG-I, retinoic acid-inducible gene I; siRNA, small interfering RNA; TRAF6, TNFR-associated factor 6; TRIF, Toll/IL-1R domain-containing adapter inducing IFN-β; WB, Western blot; z-VAD-fmk, z-Val-Ala-Asp(OMe)-fluoromethyl ketone. Everett, H., G. McFadden. 1999. Apoptosis: an innate immune response to virus infection. Trends Microbiol. 7: 160-165. Hsu, L. C., J. M. Park, K. Zhang, J. L. Luo, S. Maeda, R. J. Kaufman, L. Eckmann, D. G. Guiney, M. Karin. 2004. The protein kinase PKR is required for macrophage apoptosis after activation of Toll-like receptor 4. Nature 428: 341-345. Kaiser, W. J., M. K. Offermann. 2005. Apoptosis induced by the Toll-like receptor adaptor TRIF is dependent on its receptor interacting protein homotypic interaction motif. J. Immunol. 174: 4942-4952. Han, K. J., X. Su, L. G. Xu, L. H. Bin, J. Zhang, H. B. Shu. 2004. Mechanisms of TRIF-induced ISRE and NF-κB activation and apoptosis pathways. J. Biol. Chem. 279: 15652-15661. Gil, J., J. Alcami, M. Esteban. 1999. Induction of apoptosis by double-stranded-RNA-dependent protein kinase (PKR) involves the α subunit of eukaryotic translation initiation factor 2 and NF-κB. Mol. Cell. Biol. 19: 4653-4663. Alexopoulou, L., A. C. Holt, R. Medzhitov, R. A. Flavell. 2001. Recognition of double-stranded RNA and activation of NF-κB by Toll-like receptor 3. Nature 413: 732-738. Scarim, A. L., M. Arnush, L. A. Blair, J. Concepcion, M. R. Heitmeier, D. Scheuner, R. J. Kaufman, J. Ryerse, R. M. Buller, J. A. Corbett. 2001. Mechanisms of β-cell death in response to double-stranded (ds) RNA and interferon-γ: dsRNA-dependent protein kinase apoptosis and nitric oxide-dependent necrosis. Am. 
J. Pathol. 159: 273-283. Robbins, M. A., L. Maksumova, E. Pocock, J. K. Chantler. 2003. Nuclear factor-κB translocation mediates double-stranded ribonucleic acid-induced NIT-1 β-cell apoptosis and up-regulates caspase-12 and tumor necrosis factor receptor-associated ligand (TRAIL). Endocrinology 144: 4616-4625. Kaiser, W. J., J. L. Kaufman, M. K. Offermann. 2004. IFN-α sensitizes human umbilical vein endothelial cells to apoptosis induced by double-stranded RNA. J. Immunol. 172: 1699-1710. Lacour, J., F. Lacour, A. Spira, M. Michelson, J. Y. Petit, G. Delage, D. Sarrazin, G. Contesso, J. Viguier. 1980. Adjuvant treatment with polyadenylic-polyuridylic acid (Polya.Polyu) in operable breast cancer. Lancet 2: 161-164. Khan, A. L., S. D. Heys, O. Eremin. 1995. Synthetic polyribonucleotides: current role and potential use in oncological practice. Eur. J. Surg. Oncol. 21: 224-227. Salem, M. L., A. N. Kadima, D. J. Cole, W. E. Gillanders. 2005. Defining the antigen-specific T-cell response to vaccination and poly(I:C)/TLR3 signaling: evidence of enhanced primary and memory CD8 T-cell responses and antitumor immunity. J. Immunother. 28: 220-228. Fujita, H., A. Asahina, H. Mitsui, K. Tamaki. 2004. Langerhans cells exhibit low responsiveness to double-stranded RNA. Biochem. Biophys. Res. Commun. 319: 832-839. Matsumoto, M., S. Kikkawa, M. Kohase, K. Miyake, T. Seya. 2002. Establishment of a monoclonal antibody against human Toll-like receptor 3 that blocks double-stranded RNA-mediated signaling. Biochem. Biophys. Res. Commun. 293: 1364-1369. Yamamoto, M., S. Sato, H. Hemmi, K. Hoshino, T. Kaisho, H. Sanjo, O. Takeuchi, M. Sugiyama, M. Okabe, K. Takeda, et al 2003. Role of adaptor TRIF in the MyD88-independent Toll-like receptor signaling pathway. Science 301: 640-643. Means, T. K., B. W. Jones, A. B. Schromm, B. A. Shurtleff, J. A. Smith, J. Keane, D. T. Golenbock, S. N. Vogel, M. J. Fenton. 2001. 
Differential effects of a Toll-like receptor antagonist on Mycobacterium tuberculosis-induced macrophage responses. J. Immunol. 166: 4074-4082. Micheau, O., J. Tschopp. 2003. Induction of TNF receptor I-mediated apoptosis via two sequential signaling complexes. Cell 114: 181-190. Hengartner, M. O.. 2000. The biochemistry of apoptosis. Nature 407: 770-776. Sun, Y., D. W. Leaman. 2004. Ectopic expression of Toll-like receptor-3 (TLR-3) overcomes the double-stranded RNA (dsRNA) signaling defects of P2.1 cells. J. Interferon Cytokine Res. 24: 350-361. Jiang, Z., M. Zamanian-Daryoush, H. Nie, A. M. Silva, B. R. Williams, X. Li. 2003. Poly(I-C)-induced Toll-like receptor 3 (TLR3)-mediated activation of NFκB and MAP kinase is through an interleukin-1 receptor-associated kinase (IRAK)-independent pathway employing the signaling components TLR3-TRAF6-TAK1-TAB2-PKR. J. Biol. Chem. 278: 16713-16719. Suzuki, N., S. Suzuki, G. S. Duncan, D. G. Millar, T. Wada, C. Mirtsos, H. Takada, A. Wakeham, A. Itie, S. Li, et al 2002. Severe impairment of interleukin-1 and Toll-like receptor signalling in mice lacking IRAK-4. Nature 416: 750-756. Picard, C., A. Puel, M. Bonnet, C. L. Ku, J. Bustamante, K. Yang, C. Soudais, S. Dupuis, J. Feinberg, C. Fieschi, et al 2003. Pyogenic bacterial infections in humans with IRAK-4 deficiency. Science 299: 2076-2079. Bannerman, D. D., J. C. Tupper, R. D. Erwert, R. K. Winn, J. M. Harlan. 2002. Divergence of bacterial lipopolysaccharide pro-apoptotic signaling downstream of IRAK-1. J. Biol. Chem. 277: 8048-8053. Balachandran, S., P. C. Roberts, T. Kipperman, K. N. Bhalla, R. W. Compans, D. R. Archer, G. N. Barber. 2000. α/β Interferons potentiate virus-induced apoptosis through activation of the FADD/caspase-8 death signaling pathway. J. Virol. 74: 1513-1523. Stewart, W. E., Jr, E. De Clercq, A. Billiau, J. Desmyter, P. De Somer. 1972. Increased susceptibility of cells treated with interferon to the toxicity of polyriboinosinic-polyribocytidylic acid. 
Proc. Natl. Acad. Sci. USA 69: 1851-1854. Tanaka, N., M. Sato, M. S. Lamphier, H. Nozawa, E. Oda, S. Noguchi, R. D. Schreiber, Y. Tsujimoto, T. Taniguchi. 1998. Type I interferons are essential mediators of apoptotic death in virally infected cells. Genes Cells 3: 29-37. Chin, Y. E., M. Kitagawa, K. Kuida, R. A. Flavell, X. Y. Fu. 1997. Activation of the STAT signaling pathway can cause expression of caspase 1 and apoptosis. Mol. Cell. Biol. 17: 5328-5337. Takaoka, A., S. Hayakawa, H. Yanai, D. Stoiber, H. Negishi, H. Kikuchi, S. Sasaki, K. Imai, T. Shibue, K. Honda, et al 2003. Integration of interferon-α/β signalling to p53 responses in tumour suppression and antiviral defence. Nature 424: 516-523. Janicke, R. U., M. L. Sprengart, M. R. Wati, A. G. Porter. 1998. Caspase-3 is required for DNA fragmentation and morphological changes associated with apoptosis. J. Biol. Chem. 273: 9357-9360. Laplanche, A., L. Alzieu, T. Delozier, J. Berlie, C. Veyret, P. Fargeot, M. Luboinski, J. Lacour. 2000. Polyadenylic-polyuridylic acid plus locoregional radiotherapy versus chemotherapy with CMF in operable breast cancer: a 14 year follow-up analysis of a randomized trial of the Federation Nationale des Centres de Lutte contre le Cancer (FNCLCC). Breast Cancer Res. Treat. 64: 189-191. Lacour, J., F. Lacour, B. Ducot, A. Spira, M. Michelson, J. Y. Petit, D. Sarrazin, G. Contesso. 1988. Polyadenylic-polyuridylic acid as adjuvant in the treatment of operable breast cancer: recent results. Eur. J. Surg. Oncol. 14: 311-316.
2019-04-22T16:25:39Z
http://www.jimmunol.org/content/176/8/4894?ijkey=5959f5a5e427a83c14c2eb6fbf3e66302e14cace&keytype2=tf_ipsecsha
Fairly easy with some difficult navigation. This is yet another beautiful hike in the Big River Management Area. The trails here are numerous, unmarked, and can be difficult to navigate. That being said, it is not advisable to do this hike without a reliable map, an understanding of how to read it, and a sense of direction; be absolutely sure to use GPS tracking in case you need to backtrack. This hike starts from a small parking area along Burnt Swamp Road before the gate by the Capwell Mill Pond Dam. It is about three tenths of a mile from Nooseneck Hill Road. After passing the gate you will see the dam on the left. Shortly after the dam, follow the narrow trail to the left. It climbs slightly uphill into a grass field before winding into the tall pines. Soon a trail comes in from the right. Stay to the left here and you will cross a bridge. The view, overlooking a tributary of the pond, is quite pleasant. After the bridge the trail splits; continue straight. The trail slowly climbs uphill through a lush forest of pines. Pay attention to the trail intersections on this walk. At the next trail intersection continue straight again, following the main trail. You will continue to climb slightly uphill. This section of trail can be quite wet after a heavy rain. You will soon pass a stone wall. Just after the wall is a narrow path to the left. Ignore it for this hike and continue ahead. You will soon pass a second stone wall, and then the trail winds a bit before coming to a large boulder at a trail intersection. This is about the one mile mark. Ignore the trail to the left and continue straight on the main trail as it starts to bend to the right. Slow down and start looking for the next trail intersection about one tenth of a mile after the large boulder. As the trail starts to turn to the right by a mossy rock with a tree growing on it, there is a trail on the left. It is narrow, but defined enough to be noticed. Turn left here and follow the trail as it starts downhill.
Soon the trail ends at another well defined trail. There will be a white blaze on the tree at the intersection. Turn left here. In a few yards you will come to another intersection with a tree blazed white. You will want to continue straight, but first follow the trail to the right to the bridge crossing the stream called Mud Bottom Brook. The slight detour is well worth it. Take a moment here. The babbling brook drowns out all other nearby sounds, and you are out in the middle of nowhere, nearly a mile from any civilization. Return up the hill to the tree with the white blazes and turn right. After making the turn and following the trail you will pass a stone wall on the left. The stone wall then flanks the trail to the right for a bit before the trail starts to descend, leaving the stone wall behind. The trail then starts a slight bend to the left, passing a boulder in the middle of the trail. The boulder is a good reference point and is just the right height to sit on for a moment and take in the nature around you. From here the trail continues downhill, bending to the left. You will start getting your first glimpses of the pond through the trees on the right. Passing another stone wall, the trail splits. The two paths rejoin in a few yards, where the trail splits yet again. At this split stay to the right. There is also some mountain laurel scattered around the area. Continuing ahead, the pond is still to the right through the trees and there is another stone wall on the left. The trail turns to the left, crossing the stone wall, and then to the right, meandering to and from the pond. A trail soon comes in from the left; stay to the right and continue to the end of the trail. Turn right and you will cross the bridge overlooking the tributary of the pond once again. Just after the bridge turn right onto the trail that will lead you back to the dam and parking area. Blaze orange is required during hunting season.
Map can be found at: Capwell Mill Pond (Map 1), (Map 2). Pines, Stone Walls, And The Pond. The newest addition to the Francis Carter Preserve, its western end, acquired in 2014, offers the red blazed Narragansett Loop and River Trail. This part of the preserve is a great example of how nature can reclaim land that was once industrial. This hike starts from the parking area along Kings Factory Road just south of the Pawcatuck River. The red blazed trail meanders east along the river’s edge, first passing a fenced-in cemetery. The trail soon comes to an area that is sandy and rutted by dirt bikes and ATVs. Stay to the left here and you will find the next blaze. The aptly named River Trail soon runs along the Pawcatuck River once again. The trail here climbs up and down small hills before ascending gently to a large open field. From here it is important to follow the signs. Turning left, follow the red blazed Narragansett Loop. Bear in mind that this is a new trail and not as defined as other established trails in the preserve. In time the trail will be well used and well defined. For now, keep an eye out for the next sign. The trail continues northward for a bit before turning to the right and joining the Grassland Trail. Here you will want to stay to the right, following what is now both the Narragansett Loop and the Grassland Trail to the south. The path soon turns to the left, following the southern perimeter of the large meadow. Just before the woods, on the left, there is an informational board about the grasslands. Take a moment to look at it. From here, continue straight into the woods following the yellow blazed trail. Just before the hill, the red blazed Narragansett Loop turns to the right into one of the nicest stretches of trail in Rhode Island. On the left you will find the ruins of an old chimney. The trail winds below a canopy of pines and hemlocks before passing under power lines.
Continuing ahead, the trail follows an old stone wall before turning to the left, slightly uphill, to some large boulders left behind by the last glacier. The trail soon comes to an old cart path where you turn right, continuing to follow the red blazes. The pine trees here are very dense and thick, making for a well shaded pine grove. The trail soon comes to a pair of gates. After passing the gates, you will be on an old asphalt road. The signage here indicates that this section of the Loop Trail is temporary. The road soon comes to an intersection. The roads ahead and to the left are active. Turn right onto another abandoned asphalt road. This was the entrance road of the former industrial complex. The road soon bears to the left and becomes a dirt road. A few hundred feet ahead is the intersection where the River Trail meets the Narragansett Loop. Turn left here and retrace your steps back to the parking area. Hunting is allowed on this property at times. Be sure to wear blaze orange during hunting season. Map can be found at: Francis Carter West. Another short, beautiful hike in Westport. The Noquochoke Conservation Area, part of a former Boy Scout camp, offers about three quarters of a mile of trails through some truly impressive and tall pine groves. The property includes several stone walls and an operating well from days past. The trails, though not blazed, are well marked with signs at intersections. Map can be found at: Noquochoke. Fairly easy with some significant elevation. At the end of Williams Road is a small parking area for a couple of cars. The trail head is just to the right of the Land Trust sign. The trail winds downhill flanked by stone walls and old barbed wire fencing. Along this strip of wooded land, on each side, are large fields. At the end of the trail you can catch a glimpse of Stillwater Reservoir through the woods.
The trail to the right leads into one of the large fields before dead-ending near the property line with Hebert Health Center. The field is a good spot to watch birds circling above. The trail to the left leads further into the woods, slowly winding down to a wooden bridge that crosses a beautiful cascading stream. The stream at the time of this hike was running particularly fast due to a recent snowmelt. The trail then continues, following above the stream, into the Connors Farm Conservation Area at the blue blazed trail. A loop through Connors Farm, itself a beautiful hike, would add distance to the hike. From here retrace your steps back to the parking area at the end of Williams Road. A deer was spotted here at the property, as well as chipmunks and a pair of red-tailed hawks. Cascading Stream From the Footbridge. Offered as a “wildland” that is open to the public, this is one of the newest trail systems in the State, recently opened by the Nature Conservancy and the Tiverton Land Trust. The entrance is just beyond a garage off of Main Road. The trail follows a stone wall to a large kiosk. At the kiosk the trail turns to the left through the wall and immediately right, continuing to follow the tall stone wall before bearing to the north. The trail then follows the back property lines of the neighbors for several hundred feet, passing some puddingstone boulders, before turning abruptly to the right. From here the trail follows an old cart path into the heart of the property, first passing a small swampy area and crossing some small boardwalks. The trail soon starts its long gradual climb uphill before coming to the first trail split. The trail intersection is well signed. Stay to the left here to do the loop trail. The route retraces old trails, and a link connects them to provide a loop trail in the back parts of the preserve. This loop climbs some of the higher elevations of the property. There is also an abundance of boulders along the loop.
Being new, the trail is still rather primitive. It is blazed with white diamonds featuring an owl. Be sure to follow the blazes to stay on the trail. After completing the loop trail, retrace your steps back to the first trail intersection. From here follow the Cliff Trail. It is blazed the same as the Loop Trail (white diamonds with owls). This trail winds southerly, passing a small stream, dipping into a valley, and then climbing up to a large rock outcrop that looks out to the west. Be wary of the edge, as the opposite side is a nearly straight drop of 50 feet or more. From here retrace your steps back to the trail intersection and then down the trail you came in on. Be sure to remember to turn to the left near the neighboring properties and follow the trail to the parking area. Hunting is allowed on this property. Be sure to wear blaze orange during hunting season. Moderate, some hills, can be difficult to navigate. The New London Turnpike was once the main thoroughfare between Providence and New London. The road, nearly straight for miles, was scattered with small villages along its route. At the intersection of Congdon Mill Road was one of these small villages. As railroads and public roads were built, the once very heavily traveled toll road became nearly obsolete. Now off the beaten path, this particular village became a haven for gambling, prostitution, and the occasional murder, earning it the name Hell’s Half Acre. Today nothing remains of it except an old cellar hole here and there, if you can find them in the growth of young pine trees. For this hike, covering a large portion of the southern parts of the Big River Management Area, we started at the parking area along Congdon Mill Road just east of the Congdon River. The old dirt road leaves the parking area in a northeasterly direction. Immediately, we saw a great blue heron fly overhead as we were starting our hike. After going downhill a bit the road splits.
Here we turned right, following a rocky trail uphill. Soon there is a spur trail to the left that leads downhill to a small pond. We checked it out and then returned to the trail we were on, continuing uphill, soon overlooking the valleys below. Along the way you will come to a property marker on your left. It appears to read “RA 1885”. Ahead is a dip in the trail as it descends quickly before climbing rapidly back uphill. There is a split in the trail here as well. Stay to the left, and at the top of the hill turn to the left, following the most defined trail. You will soon come to a “faint” trail intersection. Continue to follow the well defined trail here. A little further ahead is yet another trail intersection. Turn left here and stay to the left as the path widens into another well defined trail. The hardest part of the navigation is now behind you. If you have taken all the proper turns you will soon be following the top of a hill with a deep valley to your left. It was around this area that we caught a glimpse of a deer leaping through the woods. At the next trail intersection we stayed to the right, making our way to another intersection where we stayed to the left as the trail descends toward Hell’s Half Acre. You will notice that the forest floor is now covered with a dense growth of young pines. When you approach the next intersection stay to the left again. Here the trail loops near the intersection. The growth of the pine trees covers what cellar holes may be here. There is no evidence of the village whatsoever along the trail. But when the late October wind kicked up ever so gently, we could hear the laughter of young women, drunk men, and a tavern piano playing. The trail then winds to the north, soon crossing a rickety old bridge that spans a small brook. The trail then comes to another intersection. Look over your left shoulder; there should be a sign that says “Buck Run”. At the intersection stay to the left.
Ahead, unfortunately, there is evidence of humans: a small section of trail is littered with trash from yesteryear. The remainder of this trail offers stone walls and an occasional boulder. Continue straight, passing a trail coming in from the right and a trail on the left. Soon you will come to an intersection of old dirt roads. Turn left here onto Sweet Sawmill Road, a well defined trail that you will follow straight back to the parking area. The old dirt road soon becomes flanked by stone walls and passes open fields where pheasant hunters can be found. Continuing straight, you will pass an old wooden “Regulations” sign and cross a small stream once again before ending the hike at the parking area. Big River is notorious for its maze of unmarked trails. It is highly recommended to not only obtain a map of the property but also use a GPS tracking device while hiking here. This hike is fairly easy with some hills, but navigation can be difficult and one could easily get lost here. Also, this area is used by hunters. Be sure to wear blaze orange during hunting season. Map can be found at: Hell’s Half Acre (courtesy of Auntie Beak). Difficulty is determined by the individual legs of the hike. Established in the early 1930s and completed by 1936, the Narragansett Trail was one of the longest trails in the area. The original route ran from Lantern Hill in North Stonington, Connecticut to Wordens Pond in South Kingstown, Rhode Island. Today, the Narragansett Trail ends at Ashville Pond in Hopkinton, Rhode Island. Unfortunately, the trail is closed in some sections, some temporarily and some permanently, and has become non-continuous. The temporary closure is due to clearing of land, and the trail should re-open in due time. The permanent closure is on the land of the Groton Sportsman Club. The Connecticut Forest and Park Association has temporarily re-routed that section of trail along roads.
Nonetheless, this trail is hands down one of the best hiking trails in Southern New England. The trail is blazed light blue in Connecticut and yellow in Rhode Island. This hike was done as a one-way hike using car-stops. Difficult to Strenuous With Some Climbing. The westernmost portion of the Narragansett Trail climbs over Lantern Hill just southeast of the Foxwoods Casino complex. Starting from a makeshift parking area (with no signage) along Wintechog Hill Road, the light blue blazed trail immediately begins to climb the hill, following an old cart path. After a couple hundred feet the trail levels off for a bit before coming to the red blazed Lantern Hill Loop Trail. Be sure to watch for the blue blazes of the Narragansett Trail when you approach trail intersections. You will want to follow them, and not the red blazes, for this hike. The Narragansett Trail then starts to steadily climb the hill once again. The inclines are quite impressive at times. The trail first overlooks the Pequot Reservation to the north and west, offering views of the casino and Lantern Hill Pond below. The trail then climbs over the summit to a stunning overlook with miles and miles of sights to the east and south. Clear days will offer a view of the Atlantic Ocean to the south. It is also interesting to see the hawks and vultures soaring through the sky, sometimes below you. Use extreme caution along the edges here, as a fall would surely be fatal. Here, too, on the first day of spring, the Westerly Morris Men climb the hill for their annual sunrise dance at the summit. The hill got its name during the War of 1812, when it was used as a lookout. When the British were spotted approaching, barrels of tar were ignited to warn nearby residents. After spending some time at the summit, continue following the blue blazed trail as it winds, at times steeply, down the hill. There is one section, which we dubbed the Lemon Squeeze, that will challenge your footing, balance, and upper body strength.
The trail then traverses the south side of the hill, passing through groves of mountain laurel before coming out to the North Stonington Transfer Station. Again, be sure to pay attention to blazes and turns at intersections. After the Dog Pound the trail turns to the left through the transfer station and back out to Wintechog Hill Road. At this point you have hiked 1.4 miles of the Narragansett Trail. The trail continues ahead; however, it is currently closed (from Wintechog Hill Road to Route 2) because of logging. The View Looking East From Lantern Hill. This section of the Narragansett Trail is temporarily closed and is due to re-open in the near future. Moderate to Difficult, Long uphill section, stream crossings. The blue blazed Narragansett Trail continues from Ryder Road easterly, passing a small Nature Conservancy property known as the Gladys Foster Preserve. The trail then starts a climb up Cossaduck Hill. This section of the trail can be quite difficult as there is a quick increase in elevation. There are some impressive outcrops and ledges along this stretch as you climb toward an outlook known as Cossaduck Bluffs. Some locals also call it the Yawbux Valley Overlook. The view to the south here is quite impressive. The trail then winds slightly downhill, passing some stone walls and entering the Pachaug State Forest. The trail then comes to an intersection where you need to take a left and then an immediate right. Be sure to follow the blue blazes. The trail steadily continues downhill, passing through pine groves and beech stands. Along this stretch we came upon wild geranium and reishi mushrooms. After a steep decline we came to the first of a few major crossings of the Yawbux Brook. Be sure to look for and follow the blazes by the brook. Furthermore, prepare to get your feet wet and/or muddy after heavy rains. After a bit of rock hopping over the brook, the trail gets rocky and root-bound before coming to the second brook crossing.
This one is a pair of logs that are quite rickety. A pair of trekking poles or a hiking stick will serve you well here. Again, be sure to find and follow the blazes. The trail then winds through a wet and muddy area conducive to the growth of ferns before taking a well marked right turn, through a stone wall, and over the Yawbux Brook once again via a series of large “stepping stones”. The trail then turns to the left and through a short section that is a little overgrown before coming out to a beaver pond, complete with a beaver dam and beaver hut. The trail then follows the shore of the pond for a bit, passing swamp azalea, wild dogwood, and lady-slippers. An osprey and several swallows were spotted above the pond. No beavers were seen, but several toppled trees bore their signature marks. The trail then continues into the thick woods and eventually through another pine grove. The trail at times is covered in their needles. More areas of outcrops and stone walls line the rocky trail before it comes to the Wyassup Road Spur. This trail leads into the new Stewart Hill Preserve. Continuing to follow the blue blazed Narragansett Trail, we soon came to the last brook crossing. This one was quite wide and rocky, but at the time somewhat dry. After a heavy rain it may be nearly impassable. The trail soon crosses an old cart path and continues to wind through the forest, flanked by more outcrops and stone walls, before coming out to Wyassup Lake Road just opposite the boat ramp and parking area for the lake. “Stepping Stones” Crossing The Yawbux Brook. Difficult, Strenuous in areas, stream crossings. The blue blazed Narragansett Trail continues from Wyassup Lake, first following Wyassup Lake Road northerly a few hundred feet before veering left into the woods onto a trail just beyond a gate. The old road soon turns to the left; continue straight onto a narrower trail and be sure to follow the blue blazes.
This trail becomes root-bound and rocky as it passes a swampy area with a couple of stream crossings. The trail soon passes the first of several stone walls before winding to a massive ledge. This is the base of High Ledge, and the Narragansett Trail weaves around the left side of it to its summit. At the summit of High Ledge you can catch a glimpse of the forest to the south. Continuing, the trail then descends dramatically into a fern filled valley with a stream and massive ledges to the left. The trail then follows a ridge line that towers over the forest to the right as it winds to Ledgen Wood Road. Some of the road, an old cart path, tends to be quite rocky and makes for some difficult footing. Soon the Narragansett Trail turns left onto another old cart path and starts in a northerly direction. The trail winds downhill and narrows as it approaches a swampy area that is the headwaters of Dark Hollow Brook. You soon come to another massive ledge, and again the blue blazed trail winds up and around the left side of it. About halfway up the ledge, along the trail, there are openings to Bear Cave. When you reach the summit of Bullet Ledge you can take a peek at the trail down below. Be careful by the edges. From here, continue to follow the blue blazes as the trail continues to be hilly and substantially rocky. Along this stretch you will pass another large ledge to the left and several boulders, crossing into Voluntown, before coming to Coal Pit Hill Road. The trail continues ahead, crossing the road, and becomes narrower and slightly overgrown. Be sure to keep an eye out for the blazes along this stretch. The trail then heads generally northeast, passing stone walls and a forest floor of ferns as it winds up and down hills. As the trail turns east it starts a 150 foot descent down a narrow trail towards Myron Kinney Brook. The trail then turns south and starts climbing back uphill, following the brook.
Along this stretch you will see several small waterfalls and cascades, as well as a couple of cairns, as the trail crosses back into North Stonington. At the end of the trail, you will turn left at a stone wall onto Ledgen Wood Road once again. The gravel road heads east and soon becomes pavement, entering a residential neighborhood. At the intersection continue straight onto Johnson Road. The road gently curves to the right, and just before Pendleton Hill Road is a small pull-off for parking. Fairly easy. All road walking, detour of closed section of trail. The Narragansett Trail has been closed on the property of the Groton Sportsman Club and has been re-routed by the Connecticut Forest and Park Association. Though this detour is not blazed, it is easy enough to follow. Starting from a small parking pull-off at the intersection of Pendleton Hill Road and Johnson Road, follow Johnson Road to the northwest and then follow it to the right and back out to Pendleton Hill Road. It is advisable to face traffic for this stretch of the hike as you are now walking on a section of Route 49. Be sure to be aware of traffic. Continuing north, you soon enter Voluntown, and for a little over a half mile you will pass stone walls, farms, fields, and a few houses before coming to Sand Hill Road, where you will turn right. This road is much quieter and offers a couple of sights. On the left is Studio Farm with its barn, wishing well, and canine greeter! The road crosses Koistenen Brook and soon starts climbing uphill, flanked by post and wire guardrails and stone walls. After cresting the hill there is a small pond on the right with lily-pads and a large field of wildflowers on the left. The road then turns to the left and immediately to the right. Here at this zigzag is the beginning of Gallup Road and a homestead with a couple small farm buildings. Be sure to continue east along Sand Hill Road for another 300 feet, passing a small pond on the right.
Here you will turn right onto Tom Wheeler Road and follow it four tenths of a mile, passing more stone walls and corn fields. Look for a bright yellow sign on the right reading “Private Shooting Area”. Almost directly across the street from it you will see a sign for the Narragansett Trail. This is where the detour ends and the trail makes its way back into the woods. A Field Along Sand Hill Road. Difficult, Strenuous in areas, stream crossings, rock climbing. The blue blazed Narragansett Trail continues from Tom Wheeler Road, heading in a northeasterly direction. The trail is fairly level at first, but rocky and muddy in areas. After the first of several stream crossings, the trail descends down the first rock wall into a small valley of boulders and ledges. Some of the stream crossings can be a bit challenging, and almost all of them cascade with small waterfalls. The trail then follows a long narrow outcrop for a bit and passes through a grove of mountain laurel before coming out to Sand Hill Road. Turn right here and follow Sand Hill Road several hundred feet to the Green Fall River. Just before the road crosses the river, turn left onto the trail. Follow the blue blazed trail through an area of hemlocks and soon you will come to a cairn. In this area the blue blazed trail enters the Green Fall Gorge. The river in the gorge rushes over boulders as the narrow trail climbs up and down the steep embankments. There is a new bridge, built in the summer of 2017, that replaces a tricky river crossing in the gorge. Before the bridge, the crossing was over logs that tended to be slippery. As the trail continues you soon come to the dam and waterfall of Green Fall Pond. The trail climbs up the bank on the right side of the dam to the pond. Swimming is not allowed at this end of the pond, but the spot makes for a good resting location before carrying on.
The Narragansett Trail now joins the Green Fall Pond trail and is blazed both blue and orange along the shore of the pond. The trail first crosses over a dike before coming to a split. Stay to the left and continue to follow the blue blazes. Soon you will come to the “Tree Bridge”, a small wooden bridge crossing the Green Fall River with a tree in the middle of it. Shortly thereafter, the trail splits again. Be sure to follow the blue blazes to the right to stay on the Narragansett Trail. The blue/orange blazes that continue ahead are part of the Narragansett Crossover Trail. You no longer want to follow the orange marks. After making your turn you will head east for just under a mile to the Rhode Island border. Along the way, the trail becomes challenging in areas, crossing the Green Fall River again and the Peg Mill Brook. At the brook is an old sluice where the Peg Mill once stood. The water seems to vanish into the ground here. A few feet up and around the bend, the water reappears, trickling out of the rocks and creating waterfalls. The trail then comes to a large set of outcrops with deep crevices. It is best to sit and slide down the rocks here, as they tend to be quite high and sometimes slippery. The trail then crosses a small boardwalk, climbs another hill, and soon joins the Tippecansett Trail. Stay to the right. From here the trail is blazed blue and yellow and follows the Connecticut/Rhode Island border, weaving from side to side. Along the way is a spot known as Dinosaur Caves. The Narragansett Trail traverses over the hump of the massive outcrop. The caves below can be accessed from a spur trail after climbing over the outcrop. After Dinosaur Caves, the trail becomes significantly easier as it winds down to Camp Yawgoog Road. Across the road from the small parking area is the large granite State Line Marker between Voluntown, Connecticut and Hopkinton, Rhode Island.
The blue blazes of the Narragansett Trail end here as you leave Connecticut. The Narragansett Trail is blazed yellow the remainder of the way into Rhode Island. Difficult, Strenuous in areas with some rock climbing. The Narragansett Trail continues from the State Line Marker, now blazed yellow, easterly along Camp Yawgoog Road for about two tenths of a mile. The trail then turns right, opposite the Hidden Lake trail-head, into Camp Yawgoog. From here the trail continues through the Boy Scout camp along the western shore of Yawgoog Pond, passing over a few small streams and areas of boulders. The stream crossings are well maintained with log and timber bridges. The Narragansett Trail along this side of the camp is part of the “Round The Pond Trail”. A green blazed trail appears on the right; continue ahead on the yellow blazed trail. The trail soon nears the pond, where you can get a good look across the water. On the far side you can see the beaches used by the Boy Scouts: Sandy Beach, Medicine Bow, and Three Point. Continuing, the trail passes by Blueberry Swamp and through groves of mountain laurel before coming to Cooning Orchard. This area is where several of the camp's trails intersect. The Narragansett Trail is now joined by the red blazed “George Utter Trail” and will be blazed both yellow and red for a short section, passing through more mountain laurel and rhododendrons. The red trail soon turns right toward the Rim Trail and the Richmond Boulder Field. Continue to follow the yellow blazes of the Narragansett Trail and you will soon come to North Road, where you will turn left. Following the road to the east for about two tenths of a mile, you will come to a small parking area for Long Pond/Ell Pond. The remainder of this hike is on the property of the Nature Conservancy, the Audubon Society, and the Department of Environmental Management. Turning right and through the small parking area, continue to follow the yellow blazes.
The trail now heads into one of the state's most dense stands of mountain laurel. In mid-June, this stretch is stunningly beautiful. In fact, it is one of the most beautiful stretches of trail, not only in Rhode Island, but in New England. The hike also gets substantially more difficult, at times strenuous, from this point forward. The trail starts to climb steadily uphill, scrambling up rock outcrops and wooden stairs. The well-intended “No Rock Climbing” signs tend to be a little humorous, as in parts of the trail you have no choice but to climb down and/or up rocks. With that being said, watch your step as you climb down into a small valley before scurrying back up a large outcrop. At the top of the hill there is a small area that opens up. The blaze indicates that the Narragansett Trail turns to the right. But if you have come this far, you are in the “neighborhood” of quite possibly the most beautiful sight in Rhode Island. An unblazed spur trail leads to the left. It is highly suggested to take the time to explore this trail, as it leads down and then back up to a massive ledge. When you approach the wall of rock, stay to the left of it, as there is a way up the left side. Choose your steps carefully and exercise caution. Once at the top, take a breather and stay a while. The view is stunning as it overlooks Long Pond. A scene in the movie Moonrise Kingdom was filmed high upon this ledge. After taking in the sights, retrace your steps back to the yellow blazed Narragansett Trail to continue the hike. The trail next traverses down a large, beautiful cleft that competes with some of the natural wonders of the mountains of Northern New England. After descending to the base, the trail becomes a boardwalk that crosses a swampy area and the stream that connects Ell Pond to Long Pond. At the end of the boardwalk the trail climbs the first of three quite substantial hills. Take your time here and take breaks as needed.
This stretch will test your stamina and muscles. With Long Pond now to your left, the trail continues to climb up and over a few hills. At times you may need to crawl or climb. The trail eventually levels out some; hills remain common, they are just smaller. The trail soon follows large sections of outcrop surrounded by the woods. Large boulders become prevalent, with one, looking like the front of a ship, standing out. Stone walls appear on your right as Long Pond vanishes on the left, and soon you come to the Canonchet Road trail-head. The Narragansett Trail continues, still yellow blazed and bending to the right, making a short westerly loop before turning to the east once again. More stone walls and boulders are a common sight along this last stretch. The trail comes to a large flat outcrop where you turn to the right and then cross a rather high boardwalk. Soon you will get your first glimpses of Ashville Pond to the left. The trail turns to the right near the former Ashville Pond Beach and ends at the parking area on Stubtown Road. I would like to thank Auntie Beak for her help with the planning and logistics of this hike. It has been a pleasure to take on the Narragansett Trail with you. I look forward to tackling more long distance trails with you in the future. For more photos of this hike, please go to the Trails and Walks Facebook page.
Erik should have died on the battlefield. All of his fellow soldiers were killed in an ambush and sent to Valhalla by Brenna, a fierce Valkyrie. She spared Erik, which is something she shouldn't have done. That's why her mission is to track him down and correct her mistake. Erik is difficult to follow and never stays anywhere long enough for her to easily find him, but once they finally meet again, Brenna knows she can't kill him. What are the consequences of failing the task she's been assigned to complete? Erik can't forget the beautiful woman he saw during the worst moment of his life. When he meets her again he's intrigued enough to risk his own safety. When Brenna proves to be incapable of taking his life, Erik's protective instincts come to the surface. He will make sure that Brenna won't be punished for not completing her mission. Together they set out on a journey to stay out of the hands of the hunter that's been sent after them to complete what Brenna couldn't do. Will they be able to save themselves and fight for their love or are they doomed from the start? Her Alpha Viking is a gripping romantic story. Erik is a strong and capable warrior. He has a kind heart and is constantly looking after others. I loved his heroic nature and couldn't wait to find out how he'd deal with the challenges the higher powers throw at him. Brenna has been a Valkyrie for centuries and she never did anything wrong, but when she's with Erik she is a different person, even though giving in to her feelings results in being stripped of her powers. I loved how she keeps following her heart; she doesn't hesitate even though it means the life she used to love is over. Erik and Brenna face many obstacles on their way, but they persevere. They're both fiery and brave main characters, which makes for fabulous reading. Sheryl Nantus has a great thrilling writing style. I flew through the pages of her action-packed story.
Her Alpha Viking is a wonderful combination of fantasy and veteran life, which is something I liked a lot. I loved the mix of a creative fictional world with reality. Sheryl Nantus effortlessly switches between the two, which makes the story feel real and dynamic. Her Alpha Viking is energetic, enchanting and captivating; I loved this terrific book. If you love romance combined with fantasy and plenty of action, Her Alpha Viking would be a perfect choice. One very lucky reader of With Love for Books will receive a $25 Amazon gift card and a digital copy of Her Alpha Viking by Sheryl Nantus. Michelle Simpson is a professional illustrator and designer based out of the Niagara Region. Michelle graduated with a BAA in Illustration from Sheridan College and now works as a freelance illustrator; she is also a concept artist for KeyFrame Digital Productions, where she creates artwork for children’s television shows. I am Canadian born and grew up in the forests of Niagara Falls, Ontario. I am now a full-time freelancer and work with KeyFrame Digital Productions, where I create artwork for children’s television shows such as Tee and Mo and Season 2 of Ollie: The Boy Who Became What He Ate. I have also written and illustrated the children’s book Monsters In My House, and illustrated Hanukkah Harvie Vs. Santa Claus by David Michael Slater, published by Library Tales Publishing. In my spare time I like to garden, go for forest walks, and annoy my cats Sushi and Mr. Pounce with endless amounts of love. I am heavily inspired by nature and mythical folktales from around the world. I was always a tomboy as a child, climbing trees and getting super dirty. My best childhood memories were outside under a tree canopy. My biggest artistic inspiration will always be the works of Studio Ghibli, particularly Princess Mononoke. 3) Where did you learn to design? I studied at Sheridan College and graduated with a BAA in Illustration.
KeyFrame Digital Productions is a really fun environment because you’re surrounded by like-minded people, most of them animators and concept artists. Visually it’s just a bunch of rows of computers with a nice coffee bar at one end; the people are what make the work space so great. My home workspace consists of my computer desk for digital work, another desk for my traditional work, lots of illustrations plastered all over the walls and usually a cat trying to sleep on my keyboard or lap. 5) How do you select the fabrics you use? I don’t use much fabric in my items, I don’t have the patience for sewing, although I wish I did. I started my store to make a bit of money in college; a couple of my classmates had a store so I thought I’d give it a shot. Slowly the whole store has snowballed and transformed over the years into what it is now. Anyone can do it, it’s just a matter of putting the time and work into it. It’s the same as any kind of work, you just need to be diligent and persistent to pull through. 7) You make bookish items, where does your love for books come from? My parents always made a point to read to me before bedtime when I was little. I loved Franklin for the story and beautiful artwork. My mom always made sure I had the most recent award-winning books in my hands as I grew up as well. As I came into my teens I read a lot of comic books and manga. My love isn’t only for books but the general art of storytelling. I’m a very picky reader now; I have a hard time finding books I really like. If I had to pick an all-time favourite book it would probably be The Life of Pi, and of course, the best series of all time: Harry Potter. In regard to material possessions, 99% of all my non life sustaining belongings are art or literature based. In regard to materials used for artwork, I’ve experimented with everything. I love trying new things. I’d say the majority of all my artwork now is either done on the computer or mixed traditionally with watercolour.
When I was little, I can remember the first time I really discovered drawing. I was on the floor with a huge tin of crayons and I remember asking my mom and dad which hand I should draw with. They said whichever one I preferred; my left was the most comfortable. I remember I drew a snowman, and something clicked inside my little brain. My mind was blown that I could make anything I wanted on this piece of white paper. And that was when I knew that I wanted to do this for the rest of my life. So long story short: I’m able to support myself with my passion. I create something new on that blank piece of paper every day, which is exactly what my child self always wished for. Two very lucky readers of With Love for Books will receive a bookmark of choice from Michi Scribbles. When Anniek and Suze asked me to write a guest blog post about what made my heart beat faster, my first silly thought was: “You mean besides trying my luck on a treadmill for the umpteenth time?” It usually takes only five minutes before I realize just how hard my heart is working and I find myself gasping for air, chest hurting, and with a beet red face. Then I stop the machine, collapse on whatever flat, unmoving surface is nearest, drink a bottle of water in one go, and give the treadmill a rest for at least another year. And I know everyone says it’s just a question of practice and consistency, and that with every new day it gets easier, but I’ve reached the point where I might finally accept that I’m not a runner and never will be. So to move on to less potentially physically damaging ways of jacking up that heart rate, it took me only about half a second of serious reflection to realize that this year it would be really easy for me to answer Anniek and Suze’s question. Because what made my heart beat the fastest it ever has in its life—yeah, even faster than the treadmill for once—was to hear another heartbeat for the first time, one that wasn’t mine. Ta-dah!
I’m pregnant with my first baby. Six months later, I’m not over either phase. But hearing my tiny baby’s heartbeat for the first time and seeing my husband get a little teary-eyed as we stared at the monitor has definitely been the stronger emotion for me to date. One that definitely made my heart beat faster and made me feel so alive—for the whole of five minutes, then I had to go throw up again and reverted to slug status for the next three months. And I can’t even begin to imagine what the excitement will be when we finally meet him—yes, it’s a boy!—in just a few months. So if you have any advice on how to survive labor, breastfeeding, or really any suggestions for first-time moms, please fire away; I’m ready to absorb all the wisdom I can before the baby makes his debut into the world. Three very lucky readers of With Love for Books will receive audiobook copies of Love Connection and I Have Never by Camilla Isley. Seven runners-up will receive an audiobook copy of I Have Never. Rosy loves living in Penmenna. She’s the headmistress of the village school and has built a great life for herself. She’s still single and isn’t looking for love after having her heart broken in the past, so when a handsome man moves into the cottage next door, Rosy is determined not to fall for him. Matt is a successful gardener, working on a television show. When he sees Rosy for the first time, he knows she’s special. Will he be able to win her heart? Rosy’s job is on the line. There are plans to close her beloved school. Even though it performs outstandingly, higher powers want several schools to merge. Nobody in Penmenna wants to lose this important, welcoming part of their community. Rosy needs all the help she can get and Matt offers his assistance. Will she be able to resist his charms once they’re spending a lot of time together? The Cornish Village School - Breaking the Rules is a wonderful heartwarming story. Rosy is kind, smart, capable and organized.
However, she lets her past influence her present and doesn’t easily let people in. Matt likes her a lot and doesn’t give up; I loved his persistence and genuine, friendly personality. He’s handsome and sweet, which is a fabulous combination. It was fun to read about their adventures and I couldn’t wait to find out if these two adorable neighbors would have a chance at true love. I read their story in one sitting and enjoyed every single page. Kitty Wilson has a terrific sense of humor. She combines sizzling chemistry with hilarious situations and this works very well. Her story oozes charm and I was entertained from beginning to end. I loved her warm descriptive writing style and could easily picture every single scene. I enjoyed the large number of surprising twists and turns, the gorgeous atmosphere and the great fast pace of the story. The Cornish Village School - Breaking the Rules is a delightful feelgood story that made my heart melt and put a big smile on my face. If you love heartwarming stories set in small towns with plenty of humor and romance, you don't want to miss The Cornish Village School - Breaking the Rules. Kitty Wilson has lived in Cornwall for the last twenty-five years, having been dragged there, against her will, as a stroppy teen. She is now remarkably grateful to her parents for their foresight and wisdom – and that her own children aren’t as hideous. She spends most of her time welded to the keyboard or hiding out at the beach and has a penchant for very loud music, equally loud dresses and romantic heroines who speak their mind. Thank you for inviting me on; I’ve always been a big fan of the blog so it’s lovely to be here. I thought I’d share with you the four things that really helped me shape the sort of book I wanted to write. I knew it would be a romantic comedy but from there I had to sit and have a think. Luckily, I didn’t have to go far to find inspiration. The first, and perhaps most obvious, inspiration is Cornwall.
I have lived here for twenty-five years after being dragged here by my parents. Initially I fled back out again, furious that they had moved me (at sixteen years old) from a city to a village with only a church, a pub and a village shop; and that was the front room of someone’s house and opened on Wednesdays. However, I came back a year or two later and fell in love. And how could I not? There is a magic in the air here that infuses the whole county. The pace of life is so much slower, everyone is relaxed and no-one is rushing anywhere unless it’s to catch the tide or ride a wave. It is a county crammed full of mysticism, romance and intrigue, which is why it has lent itself so beautifully to epic stories of love and danger, like Poldark and Penmarric, Jamaica Inn and Frenchman’s Creek. It has golden beaches that stretch for miles, hidden coves, rambling moors and tangled woodlands. And I get to see it all, every single day. Cornwall is so beautiful; every glance and turn lifts your heart and makes you think of things that might have been or could be. Not only is it a setting, it’s a therapy. With every writerly niggle I take myself to the beach and sit back and breathe as it resolves itself. Magical air. Following on from that is my local village. Penmenna is very much a figment of my imagination and not the village near me. However, living next to a village has taught me how community life down here is supportive, inclusive and life-affirming. Villages are a microcosm of human life and you will find all the problems and all the joys here that you will up and down the country; it doesn’t matter how pretty the setting, human emotion and responses are the same the world over. Villages across Cornwall were the physical inspiration for Penmenna but my own experiences of village life helped me shape the community in the book.
It helps that they are usually full of the outrageous - there’s nothing as eye-opening as a village fete; don’t be fooled into thinking it’s all jam and a tombola. I’d tell you more, but they’d kill me! I am a parent (my children are largely feral - so I should be careful claiming that) and have been both a teaching assistant and an infant school teacher. My professional life and personal life mean that I have been on both sides of the school door, and have experienced the politics of the playground (fierce) as well as the inner workings of the staff room (much friendlier). Schools are fascinating places once you scratch the surface a little. Teachers come in all shapes and sizes and all have a strong desire to encourage a love of learning, just usually with very different styles. I know lots of Harmonys and Amandas. What used to drive me mad as a teacher, and even madder as a parent - competitive mothers thrusting their child’s high-level book band in everyone else’s face, discussing how so and so had such and such in their lunchbox, and did you hear about x’s husband - did give me a beautiful starting point for the woman who became my favourite character. Schools are often the hub of community life and have a history within a village that stretches back generations, so as well as the personalities in a school I wanted Penmenna to reflect that side of the community, often a tether for parents, children and teachers alike. Finally, everything I have ever read ever. I didn’t think as I was growing up that reading was anything more than pure escapist heaven. Little did I know that when I was an adult I would become an unintentional cuckoo and that every characterisation, every plot point would still be sitting in my head, deeply buried, and help me to write. When reading we identify what we love in a book, what we hate, how this twist works and that one doesn’t, and then it feeds into our writing without us being consciously aware of it.
Other writers often give us permission to be bolder than we are comfortable with and light the way; their books relax and inspire and I love them all. Three very lucky readers of With Love for Books will receive a digital copy of The Cornish Village School - Breaking the Rules by Kitty Wilson. Hannah can finally breathe again when her divorce is finalized. She’s happy her abusive ex-husband can’t hurt her anymore. Having full custody of her daughter Sophie is a big relief as well. Hannah is independent at last and she’s going to make the most of her newfound freedom. Together with her friends she’s ready to make a fresh start. Will she be able to leave her devastating past behind and find the happiness she deserves so much? Hannah wants to offer others in the same situation a place to stay, so she registers her home as a safe house for abused women. With her best friend Travis to protect them at night, Hannah can find closure while helping others who are going through a traumatic experience that resembles her own. Hannah and Sophie are slowly healing while receiving plenty of love and attention from the people dear to them. They have much to give as well, and slowly a new normalcy begins. Will Hannah be able to let love in again when the opportunity of a lifetime presents itself? The Lullaby Sky is a fantastic story filled with love, warmth and positivity. I loved how Hannah surrounds herself with goodness and light after many years of terror. She’s found her strength and is using it to help others. She’s a wonderful sweet woman with a heart of gold. Together with her adorable little girl Sophie she finds the fun in life again and can finally enjoy herself and smile, which is such a great theme for a story. Carolyn Brown combines this with brilliant main characters. Hannah's best friend Travis, for example, is a kindhearted, endearing man and I loved his gorgeous gentle personality.
For me the main characters really made this story; they are all a true delight and I enjoyed every single word of their adventures. Carolyn Brown has a fabulous enchanting writing style. I love how she manages to create a perfectly fitting atmosphere for each story she writes. It works every single time. The Lullaby Sky is a very special book about precious moments, unconditional love and beautiful friendship. I enjoyed every single page of this impressive story. Carolyn Brown's books keep surprising me; they're fantastic gifts and every time I read one of her novels it feels like Christmas. I absolutely adored The Lullaby Sky, it's an amazing book and I highly recommend it. If you love beautiful stories about friendship, true love and fighting for a better future, The Lullaby Sky is an absolute must-read. One very lucky reader of With Love for Books will receive a signed paperback or Kindle copy of The Lullaby Sky by Carolyn Brown (winner’s choice). Jaz is forced to make a fresh start. She was part of a loving family, but she lost everything and is heartbroken. She ends up in Sunnybrook, where she gives Zumba and yoga lessons. She can also be found at the Little Duck Pond Cafe on a regular basis. There she forms a close friendship with Ellie and Fen. She needs friends more than ever, but she can’t tell them the truth about her past. Through Fen, Jaz finds a job as a manor tour guide, so she has the means to support herself. Can she finally have that new beginning she so desperately needs? When Jaz meets Harry she isn’t ready for more than friendship. Plus she needs to stay out of his photographs, because she can’t risk being found in Sunnybrook by her ex. Harry’s cheerful character makes her curious, but wary at the same time. Will Jaz be able to let him in and find another chance at happiness? Summer at the Little Duck Pond Cafe is a fabulous feelgood story.
Jaz’s relationship ends in a terrible way and being on her own is tough, especially since she misses the daughter of the man she used to love. I could easily feel her pain and completely understood her determination to keep in touch. Jaz has to find a way to make a life for herself in Sunnybrook and slowly, with the help of her new friends, she manages to see a future with light instead of just darkness. I loved seeing her grow and couldn’t wait to find out if she’d have some much needed luck and good times again. Rosie Green doesn’t need many words to tell a complete story. Summer at the Little Duck Pond Cafe is filled with charm. It’s a cosy eventful story with plenty of wonderful twists and turns. I really liked this fast-paced novella. It’s fun, heartwarming and fascinating with delightful main characters and sweet romance. If you love heartwarming stories set in small towns you will definitely like The Little Duck Pond Cafe books. The stories are best read in order. Rosie’s brand new series of novellas is centred on life in a village café. A fortune-teller once told me (when I was but a young thing of 25-ish) that I wouldn’t really ‘blossom’ until later in life. And I think she was right. My dream was always to become a published author. The ten-year-old me wanted desperately to write like my heroine, Enid Blyton, whose books I devoured with a passion. And I did try. But it was another thirty-odd years before I eventually got serious about the business of writing a book – and a good few more before I finally wrote something worth publishing. But I finally did ‘blossom’. That’s exactly how it feels. And now I keep having to pinch myself that I’m actually an author. These last few years have been the most thrilling, nerve-racking, adventure-filled and chaotic of my life. It’s been a steep learning curve but hugely rewarding, too – and there are so many lovely things about the process of writing a book that make my heart beat a little bit faster!
Beginning a brand new book – now that is really exciting! Starting with a handful of characters and an idea of the plot, and knowing that in just a few short weeks, these ‘stick figures’ will have filled out and become real people in my head. Finishing a book also gives me quite a buzz. A book tends to pick up pace as the finale draws near and I find I write faster, too, caught up in the exciting world of my characters. And there are plenty of moments during the writing of a book when I’ll have a ‘light bulb moment’. I love those times, when I suddenly have a great idea for a plot twist. Of course, the greatest excitement of all is when publication day rolls around, and the book you’ve been working on for months – your precious book baby – is finally released into the world. I’m certain that no matter how many books I might write in the future (and I intend to write a lot!), publication day will always make my heart beat that little bit faster! One very lucky reader of With Love for Books will receive one of these beautiful duck tea towels from Rosie Green. Enter this giveaway for a chance to win books of choice and bookish Etsy surprise gifts (winner will be asked for preferences) worth $30 each. Good luck! Tess has a quiet life. She's working at the local library and spends as much time as she can with her alcoholic father. Tess's world is turned upside down when she spots her doppelganger walking down the street and entering a hotel. Tess decides to follow her, and that is how she meets Mimi. Mimi is bolder than Tess; she isn't afraid to go after what she wants, but there are also many similarities. Mimi and Tess even share a birthday. They have so much in common that it isn't likely that their extreme likeness is just a coincidence. Who is this woman and why has she suddenly entered Tess's life? Having a lookalike doesn't mean Tess can trust Mimi, though.
When Tess is at the police station answering questions about a body that has been found in a local marsh, she doesn't have a choice and has to share her story about Mimi. What is Mimi's involvement in this tragedy and will she be believed? According to Mimi they are identical twins. Tess hasn't seen her mother in years, so is it possible she was off raising another daughter all this time? Will Tess be able to get to the truth before it's too late to prove her innocence? The Wrong Sister is a fantastic gripping story. Mimi is a complete mystery, which is something I absolutely loved. I was totally mesmerized by the secrets that surround her and couldn't wait to find out more about her fascinating personality. Mimi encourages Tess to step out of her comfort zone, to be bolder and more forward. They are opposites in some ways, but there are also many common characteristics. Tess is reluctant to let Mimi in, but the woman's persistence wins her over every single time. I was anxious to find out more about their connection, their past and the revelations that unfold. Each step of their journey kept me on the edge of my seat. T. E. Woods has a fabulous compelling writing style. Her vivid descriptions, skillful distribution of exactly the right amount of tension and amazing presentation of a large number of interesting puzzles kept me intrigued from beginning to end. The Wrong Sister constantly surprised me, which made me eager to keep reading. The story has a great fast pace and there are plenty of terrific twists and turns that constantly piqued my curiosity. I really enjoyed reading this marvelous thrilling book and can't recommend it enough. If you love gripping stories filled with terrific secrets you should definitely read The Wrong Sister. T. E. Woods is a clinical psychologist in private practice in Madison, Wisconsin. She is the author of the Justice series and the Hush Money series.
Her habit of relaxing by conjuring up any manner of diabolical murder methods and plots often finds her friends urging her to take up knitting. Three very lucky readers of With Love for Books will receive a paperback copy of The Wrong Sister by T. E. Woods. Amy is looking forward to the next chapter in her life. She's going to Chicago to study architecture, just like her father. Going to Chicago will give her a chance to see more of him. She hopes that when she's there, their bond will become stronger. However, there's a summer in her small hometown Shelby to get through first, and Amy's mother has no idea what her daughter's plans for the future are yet. Amy's mother doesn't approve of Amy's father and Amy knows she isn't going to like her plans. Amy isn't looking forward to the long days ahead of her, but then Seth appears. Who is this mysterious guy and is it smart to form a relationship when neither of them will stay in town? Seth is in Shelby to make sense of his past. He wants to talk to his father to find closure, but that isn't as easy as he hoped. His father was keeping many secrets and doesn't welcome Seth with open arms. Fortunately there's Amy to offer him some distraction. Seth might not have found what he was looking for in Shelby, but he leaves the town with plenty of new experiences. Will he be able to embrace the future now that he's faced his past? My Lullaby of You is a wonderful story about the complexity of family, dreams and finding love in an unexpected place. I was intrigued by both Amy and Seth. They didn't have easy childhoods and their first introduction to adulthood is inevitably colored by their experiences. I was curious to find out how they'd deal with their pain and if they'd be able to move past it to have a brighter future. Alia Rose skillfully writes about their feelings; she doesn't spell everything out and leaves room for the reader to interpret, which is something I absolutely loved.
My Lullaby of You is a story about difficult relationships, forgiveness, uncovering truths and internal struggles. I liked how Alia Rose deals with her subject matter; she never makes her story too heavy and adds the exact right amount of seriousness to each chapter. Because her writing reads easily, I flew through the pages of My Lullaby of You. I was captivated, enchanted and entertained. I read this beautiful book in one sitting and highly recommend it. If you love beautiful stories about growing up and learning to love you don't want to miss My Lullaby of You. I've been writing since I fell in love with reading and now the characters in my head refuse to leave me alone. My debut novel, My Lullaby of You, is out now! Three very lucky readers of With Love for Books will receive a Kindle copy of My Lullaby of You by Alia Rose. This question made me think, a lot. What does make my heart beat faster? And why does it do that? I’m assuming we’re talking positive things here, not spiders, the dark or maybe worse? So, what gets me excited? As far as I can tell there are two things. The first is hope. Hope that better is to come, success, achievement, happiness. Hope is that spark of excitement I get when I’m learning new things. Hope that I will improve. Hope that I will find happiness and contentment in something other than those I love. I get excited when I understand an aspect of story craft I have never quite understood before, or thought I had but now realise I’d been wrong. I feel excited that I may become a better storyteller; that I may one day succeed. I feel excited when I try a new craft, because of the hope of spreading my wings and being able to create something different, like my recent venture into screenwriting. It excited me because it made me feel I was learning to be both a better writer and a more diverse one. I feel my heart beat a little faster when I finally get the hang of a new move in pole dancing, my other love. 
And when that move has eluded me for so long, the joy is even greater. I deliberated for a long while on what this meant. Had I been wrong? Was it learning that thrilled me? Was it achievement? But not all learning thrills me. And I haven’t yet found what I would deem success. I feel a sense of achievement after I’ve cleaned the house, but that DOES NOT make my heart beat faster. No, it was hope. The hope of these things. Like the anticipation of Christmas – why I like Christmas Eve best. And the hours before you check your lottery ticket, when all things are possible. Hope is what makes my heart beat faster, hope and guinea pigs. Jake has unexpectedly inherited a title, so he's now back in England fulfilling his viscount duties. Jake wants to marry a decent woman, so he can save his daughter's reputation. They left the Far East with many secrets that aren't supposed to come out. However, an art theft that's linked to Jake's past threatens to make life difficult for his daughter. To protect her future, Jake has to find out more about the thief. Lady Olivia, who's a passionate art lover, seems to have a connection to him, which Jake could use. The woman intrigues Jake, but scandal surrounds her. Should he get involved with her? When Jake needs to find a good school for his daughter, Olivia can assist him. In return Jake will help her to become independent. Olivia wants a townhouse of her own, but as a woman alone she isn't able to buy one. With Jake's involvement she can achieve one of her biggest dreams. Once they've fulfilled their parts of the bargain they plan to go their separate ways, but they keep running into each other. Is this chance or design, and are they playing with fire by continuing to spend time together? Tempted by the Viscount is a wonderful romantic story. I loved that Olivia is strong and determined. She chases her dreams, isn't afraid to speak her mind and knows how to get what she wants. I admired her intelligence and sparkling personality. 
She constantly challenges Jake, which is exactly what he needs. Jake is used to adventure and being a new member of the ton isn't what he had in mind for his future. She keeps his life interesting, which was fabulous to witness. There's plenty of chemistry between them and I liked how they can't stay away from each other, their attraction is too strong. I loved how Sofie Darling made sparks fly, it was easy to feel this magnetism. Tempted by the Viscount is a captivating book filled with surprising twists and turns. I like reading a good story about secrets and Sofie Darling made me incredibly curious. I was spellbound by her words and enjoyed her gorgeous vivid descriptions. Her writing is energetic and compelling, which is something I really loved. Tempted by the Viscount definitely tempted me, it's a fantastic romantic book that put a big smile on my face from beginning to end. If you love historical romance you don't want to miss Tempted by the Viscount. It's the second book in the Shadows and Silk series, but can easily be read as a standalone. I would offer you a cup of Earl Grey and a scone, but, alas, the limitations of our digital age. 2) What’s the inspiration behind the Shadows and Silk series? Several years ago, I read a historical novel set in early 19th century Dejima, Japan and was fascinated by the setting. Before 1854, Japan was closed to all Western trade with the exception of the Dutch and only on the small, man-made island of Dejima located in the Bay of Nagasaki. It wasn’t long before my half-Dutch, half-English sea captain came to me, and his story began to unfold in my mind, then on paper. Tempted by the Viscount was born. 3) You write about people with a past, what’s so interesting about main characters who have many different sides? I feel like a past puts meat on a character’s bones, the kind I can sink my teeth into and savor. These are the characters I like to read and the ones I find interesting enough to write. 
4) How did your love for historical romance start? In middle school, I devoured all of Jane Austen and the Bronte sisters’ novels. When I picked up my first historical romance at age fourteen (Love’s Hidden Treasure by Carol Finch), I was instantly and irrevocably hooked. I suspect many readers come to historical romance this way. 5) You combine both of your degrees in your writing, how did this writing journey start and was it always your dream to become an author? I can’t say that it was always my dream to become an author. There was just a point in my early thirties, when my boys were a little older, that it clicked that I wanted to write historical romance and that maybe I had a fresh take on the genre. So, I finished my English / History degree with a concentration on creative writing and started writing happily ever afters. 6) You love swoon-worthy heroes, which key ingredients should their personalities have? For me, and this is completely subjective, a swoon-worthy hero is protective and loyal. He’s a guy confident enough to appreciate the strengths of his lady. He never, ever lets his loved ones fall. Alyssa Cole said it best: A beta on the streets, an alpha in the sheets. 7) You find inspiration through reading, which book made you realize you wanted to write and is there a book you can read over and over again? Mary Jo Putney’s Fallen Angels series is the one that made me want to write. Not because I thought I could do better, but because she inspired me so much. I reread Black Silk by Judith Ivory every year. I love her books so much. 8) If you could travel back in time, which era would you visit and what would you like to do? Since childhood, I’ve been a little obsessed with the Romans. I would love to visit the Forum in Rome when it was operating in its full glory. 9) What does true love mean to you and what should a perfect happily ever after look like? For me, you can’t have true love without true like. 
Without true like, true lust can’t transform into true love. To have a perfect happily ever after, I think both partners have to be comfortable with the imperfect, in both themselves and each other. There’s a saying that I like, that I think applies to happily-ever-afters: Don’t let the perfect get in the way of the good. And laughter. Lots of that. Thank you so much for having me on your lovely blog today! I really enjoyed myself. One very lucky reader of With Love for Books will receive a signed paperback OR digital copy of Three Lessons in Seduction, the first book in the Shadows and Silk Series by Sofie Darling (winner's choice). I'm Abbie. I'm an illustrator and writer based in Cornwall, England. Abbie Imagine produces prints, greeting cards and homeware, all hand designed by me. Every poster is printed to order in-house on thick archival matte paper. When I'm not sketching random words in the hopes it'll become a beautiful design, I'm writing my first novel, bugging one of my animals or exploring the Cornish countryside. I have two house rabbits and a Border Collie who requires copious amounts of attention and endless beach runs. I can indeed! So, I’m Abbie, mad rabbit lady, illustrator, dog mum and country bumpkin, living in Cornwall, England (aka Poldark land). I’ve always drawn and I’ve always written. In school, whenever I was asked that question (you know the one, “What do you want to be when you grow up?”) my answer was always, “I’m going to write books and draw the pictures.” Well, that and I wanted to be a Blue Peter presenter, but it turns out I’m far too introverted for the TV life. Today, I draw pictures for a living and I have technically written a book, so the childhood ambition is very almost realised. Books are a huge inspiration for me – the language, the imagery, the characters themselves. That and where I live. 
Cornwall is a very small county in the very south of England, surrounded on three out of four sides by the sea and rolling hills full of nature everywhere in-between. You’re never more than half an hour away from the coast and it’s an always beautiful but sometimes rugged landscape. When you’re stood in an ancient village, surrounded by narrow roads that were never built for cars and lined by old fishermen’s cottages, with a still-working harbour and old tales of smuggling, you can’t help but feel inspired. On a stormy day, with the sea crashing against the harbour walls and the sky a magnificently angry grey, it fills your mind with so many stories, so many images, so many voices from the years gone by. But then the sun comes out, and in your head plays, “Oh, I do like to be beside the seaside!” on repeat, suddenly a whole new feeling manifests and your work becomes sunshine and fishing boats and happiness. 3) Where did you learn to illustrate and design? Honestly, I never really ‘learned’, at least not formally. My grandad was a self-taught artist. He would sit on the sofa, in front of the TV with any scrap of paper or card he happened to have near him – sometimes it was an old bill that had come through the post, sometimes it was a cereal box – and he’d just draw. No reference photo, no plan, he’d just pick up his pencils and on to the paper, an amazing drawing would appear. His favourite things to draw were ships, animals and football games. When he and my nan would go on holiday, as presents, they’d bring me back sketchbooks and pencils, then I would sit in the lounge with them doing my own doodles. Since then, I’ve always drawn as a hobby, but I haven’t always been very good at it – it’s just something I liked to do. It’s taken years and years, but through sheer determination, I’ve finally found my ‘style’. It’s a cliché, but it really is all just about practice. 
I don’t think I do anything technically correct, I have no idea what even is ‘technically correct’, but I like what I do and as it turns out, so do other people. And that’s really all that matters. A mess! Oh gosh, I’m so messy. I’d love to be one of those artists with a beautiful, Instagram-worthy workspace, but right now, there isn’t even one patch of empty space on my desk. It’s piled high with greeting cards I’ve just printed for orders in my shop and their packaging, far too many printers, shortbread biscuits… but it’s organised chaos. An outsider would see it and run away in fright, but I know where everything is (well, mostly). They say it’s a creative thing, the messiness, so that’s my excuse. Oh, thank you! That truly means so much to me. It goes all the way back to childhood. For as long as I can remember, I’ve loved books. Growing up, I shared a room with my little sister and our mum would read us a bedtime story every night. She said the only problem with that was I would never let her stop. It was always, “just one more chapter. Just one more!” And once she did finally get to put the book down, my sister and I would then make up stories together. We called it ‘Dreamland’ and we’d spend all night coming up with stories using our favourite pop stars as characters. Some nights we’d get so carried away in our fictional world, it was suddenly seven a.m. and time to get up for school. I genuinely have no idea how we both managed to function at school on such little sleep! 6) You use a lot of quotes, how do you select the quotes you use for your products? For the main part, I try to keep my distance from copyright problems, so I stick to books that are in the public domain. Luckily for me, a lot of those are my absolute favourites. Jane Austen had such wit, so I’m always drawn to her work. Plus, my sister is really into ancestry and she’s found a line that suggests we share a great, great, great, great, great… great? 
Probably a few more greats… grandad with the Austens, so I like to think I’m waving the flag for my fourth cousin a million times removed. Alice’s Adventures in Wonderland is another favourite because it’s so quirky, yet so inspiring all at once. There’s such a cleverness to the writing, despite being considered a children’s book. I have a bit of an obsession with beautiful books and buy up any pretty copy I can find of Peter Pan, Alice, Pride and Prejudice etc., so whenever I’m in a slump, I bring out one of the pretty books and have a flick through to find a quote that resonates. 7) You're also a writer. Could you tell us a bit more about the kind of books you write? I can! I have this thing about magic. When I write, I like to transport myself to a world that’s different. Where the things we deem impossible perhaps aren’t. Where the things I like to believe could exist are almost tangible rather than abstract and contested. I have a few works in progress, but I have finished one book. Well, in as much as it has a beginning, a middle and an end. It reads from start to finish. Whether I’m entirely done editing yet is another matter. It’s set in our modern world, but in a community of witches, with covens and mystery and families at war. It’s YA and I hope for it to be a three-part series. I’m also writing one that’s a little more on the paranormal side with Victorian England pomp, a secret society and a ghost called Oswald. Out of university, I got a job as a writer. It was for a group of websites and I’d write about gaming, new technologies and gambling (although I had little to no interest in any of these things). It was a small company and I worked from home, only seeing my boss in person once every few weeks. One day, the system I used to add bits to the websites stopped working. Then my emails to my boss started bouncing back. Then I wasn’t paid. Tragically, it turned out my boss had died quite suddenly and the company was completely shut down. 
I was in Cornwall, with a media degree, but with no media jobs for hundreds of miles around. I started applying for any job that was remotely linked to my degree and in the meantime, I found Etsy. I decided to open a shop because why not? I could kind of draw and I had nothing to lose, but maybe I could make some pocket money to tide me over. I remember people saying to me at the time, “Yeah, but no one actually buys things like that, do they? You can’t do it as a job.” Five years later, it’s my full time career. It’s not been easy. To this day, it’s not easy, but it’s all mine, and that’s quite a lovely feeling. There’s a certain amount of pride in knowing everything you have, you’ve built yourself. Don’t get me wrong, it’s so, so difficult. There have been a few times I’ve thought about giving up. There will probably be a few more times in the future! Sales aren’t always consistent, but then sometimes, sales are absolutely bonkers. It’s a constant game of balancing and trying not to let emotions, good or bad, overwhelm you, but at the end of the day, it’s something I did myself. Completely and entirely. It’s my passion and heart and when I sit and think about the fact that thousands of people around the world have something I created in their homes… it’s mad– in a good way! Also, getting to have animal hugs at any point in the day is the best (though I think my bunnies disagree and would rather I left them alone for five minutes. They’re so cute though, it’s not my fault). To keep growing, to keep creating and to get brave enough to try to publish my book! My dream is to be able to walk into Waterstones and see it there on the shelf. Preferably with my artwork on the cover as well (that would really be hitting the ‘write the books and draw the pictures’ dream of six year old me), but I’m willing to compromise on that one! 
One very lucky reader of With Love for Books will receive a literature inspired print in any of the sizes offered in the shop (winner’s choice) from Abbie Smith.
2019-04-22T08:05:26Z
https://www.withloveforbooks.com/2018/08/
This apartment is located within walking distance of all the hot spots in Atlantic City - the beach, beach bars, Boardwalk, shopping outlets, casinos and many restaurants. If you are looking for a weekly or weekend getaway with your family or friends this is the perfect place! We absolutely loved the place. Three of us were in town for a car show. It was extremely close to the casinos, boardwalk, outlets and restaurants. Communication was awesome. Angelica let us know exactly how to check in, she stayed in touch with us while we were there and responded quickly. We would absolutely stay here again. My bf and I truly loved staying at Angelica's place, great location and set up. The only suggestion I have is that it gets a bit cold in the middle of the night and although we changed the thermostat the actual temperature of the apartment did not change. Still, can't wait to come back! Def a go-to place to stay while in AC! This is a great alternative to an expensive hotel room! Angelica is very nice & accommodating. The apt is nice, even though it is small. Would appreciate blinds in the windows and having more than 1 key to get into the apartment. Overall, would stay here again. Absolutely loved the stay at Angelica's place! The apt was super cute and comfortable and so close to everything... walking distance to the boardwalk! Communication was great! I would recommend her place to anyone who's in the area! Angelica's place was close to the beach + casinos, and she was communicative the weekend we were there. What you see is what you get. My friends and I came down for the night for an event. Three things wrong: the fridge didn't get cold; there was no hot water (well, it worked for 2 seconds, then you had to take a military-style shower); and there was a cute little bar unit, but I almost fell because one of the stools was broken. Noise wasn't a factor since we weren't in the apartment the whole time. 
From dipping your toes in the sand to swimming in your favorite libations, Bluegreen Atlantic Palace is the destination where you will be able to enjoy the Atlantic ocean breeze while diving into your favorite activities. Atlantic Palace is at the heart of the action in Atlantic City and the resort has various amenities available to our guests. You will feel like you are in a home away from home as the unit does not feel like a hotel room but rather a home, and this is attributed to the kitchen available, the laundry facilities on site, and the various appliances that will make your stay a pleasant one, like the refrigerator and microwave. There is plenty to do in Atlantic City! Shopping, dancing, trying your luck at the casinos, or live entertainment, there is something for everyone at this vibrant and energetic destination! Our stay was exceptional. The room was clean and the staff extremely friendly and helpful. Well equipped condo with an ocean view, located on the Atlantic City Boardwalk, minutes to the beach. With one private queen-sized bed in a bedroom and a pull-out sofa in the living room, plus an additional (Website hidden by Airbnb), it is perfect for a getaway with friends or family. Fully equipped kitchen with kettle, toaster, microwave, refrigerator and an electric range (no oven). Bathroom with a double sink and jacuzzi tub. Outdoor pool closed till Memorial Day. The three of us arrived via the Philadelphia bus station. We must have found the only taxi driver who didn't know the address, because in the early morning we realized the building had two entrances, one of which opened directly onto the "promenade" that runs along the sea on one side and the casinos, hotels, restaurants on the other! You really need to specify the name of the building, which is very easy to spot. Otherwise, the apartment is original, bright and very clean. Our daughter especially appreciated the whirlpool tub in the bathroom and the pool on the 3rd floor. 
The view from the 9th floor lets you see the contrast between the beach side and, on the other, the "behind the scenes" of this city, that is, what lies behind its "façade". And on top of that you enjoy the building's services. One small downside: the sofa is not a good bed, except for a featherweight. I went to AC for a getaway and we had a great time here, would definitely choose this place again if I ever go back! The condo was everything we were looking for. Easy to check in. 15min walk to A.C. mall and most casinos. Close to all types of food (boardwalk and otherwise). Nice hot tub in room. Nicer Jacuzzi by the pool. Clean pool - opens at 10. Small old school game room. 3rd Fl. Bring a mattress pillow top. They provide lots of basic stuff: - tons of towels and bathroom shampoos-lotions-conditioners - new toothbrushes and paste - various pots and pans - dishes of all sizes - kitchen ingredients - bunch of pillows - pull out couch. *There's NO oven.* *You'll need a pillow top for the pull out couch as well.* *Bed is noisy.* *Check out is at 12.* Overall a good value for the stay, but if you value your 5+hr sleep between casinos and shows as we do, come prepared with a mattress pillow top or inflatable bed. Definitely worth the stay. Great location. Friendly staff. Slightly smaller than it looked in the photos. Furniture not very comfortable. Nice bathroom. Smelled a little musty; we did use the air, but a window to open would have been better. Good for the price. I loved the room, it's beautiful and has a great view. I'll definitely be staying here again. Excellent location. Many hidden 4 and 5 star restaurants in the area!! We frequented many of the casinos, which we were not expecting to do!! The pictures are accurate. It definitely has been upgraded!! The most pleasant surprise was that the place was immaculately clean!! (Something expected, but it doesn't always occur.) Beach front!! Deck by the pool was huge!! We used that every day!! 
Alena answered my 100 questions within minutes of me asking!! Water in the bath is tricky, but in 5 seconds one can figure it out!! Great place!!!! The place was really good and the host (Alena) was really helpful. We have no complaints. Thank you so much Alena. Hey! Traveling for the Warped Tour and would like to share my room with a few people. Bring your own sleeping bag. $180 for Friday, Saturday, and Sunday. Avoid the traffic and stay in a room :) laid back and easy going. You can’t miss Atlantic Palace. Standing 31 stories tall right there on the Atlantic City Boardwalk, your stay at Atlantic Palace puts you in the middle of the action. This is a studio unit with unbelievable views of the world famous Atlantic City Boardwalk and ocean. Featuring contemporary décor, flat-screen TV, small kitchenette, updated bath, a double bed and a futon bed. Here, vacations are light and bright and the view is sure to dazzle with the Atlantic Ocean just across the Boardwalk. Walk out the front door and you're directly on the Boardwalk. Great place to stay. Right on the boardwalk. Amazing view of the ocean and boardwalk. Would definitely repeat! Fran’s place is fantastic. It is walkable to many of the casinos and all the fun on the boardwalk. We had a great time. The view out of the room is great; it overlooks the beach. I would absolutely recommend this place and would definitely stay again. We had to call Fran for something and his response was swift and helpful. I was in town for the Ironman 70.3 Atlantic City. The views of the boardwalk are gorgeous. Highly recommended if you're looking to stay in AC and not a casino hotel. Awesome views!!! Boardwalk entry right from the building. I frequently visit Atlantic City, and I cannot wait to return and stay here at Fran’s again! Perfect location - right on the boardwalk, and close to Bally’s and Caesars. Amazing view too! Enjoy a nice stay in this beautiful private apartment. 
Located minutes away from the beach, boardwalk, beach bars, Tanger outlets, casinos and much more. This apartment has a full living room, full kitchen, 2 full bedrooms & a full bathroom. The master bedroom offers a large and comfortable queen-size bed. The second room offers a bunk bed with an extra pull-out mattress, which perfectly fits 3 guests. This location is prestigious, within walking distance of the beach, Tanger outlet, boardwalk, casinos and great nightlife. Located in an unwelcoming and isolated neighborhood; without a car it's impossible! Diego's place was great! A nice place in an area that we felt safe in. This place definitely had its ups and downs. The location of the apartment is great in terms of being close to the beach, but not in terms of it being directly across from an ambulance dispatcher. Be prepared to be woken up during the night at least a dozen times. In addition the apartment was clean and tidy, but everything in the apartment is cheap, from the hanger in the bathroom to the chairs in the living room. The tv has little to no channels as well. Overall this place is overpriced for what you get staying there. I won't be back for another stay in the future. I read about 4 reviews before deciding on this place. I'll do you a solid, just read mine and pick this place. Location: The furthest casino is about a 5min drive, otherwise you can walk everywhere. 10-15min walk to the Boardwalk. Amazing location. Keep reading my review if you want, but somebody may book it ;) Apartment: Super clean, new appliances, tv w/ cable, and wifi. There's a fire department or something nearby, but it's not really a big deal. Diego: Pretty sure I called him at 6:30am and he answered lol! He suggested great places to eat, gamble and party. We went to all of them and had a blast! He rocks. Parking: free and tons of space. Enjoy!! It's a good cheap place to stay. Great location. 
Located in the heart of Atlantic City, walk to the casino and boardwalk, walk to shops and restaurants, newly renovated and clean with plenty of parking and public transportation. The space was a little tighter than expected and one of the beds was really hard, but the space was very clean and comfortable. Easy check in and out and the owner was lovely. Definitely recommend. This was a great place to stay and for a great price. We overall had no issues with checking in or finding it and the location was near many attractions. The only thing we didn’t like was the parking situation, had to park around the block in a parking garage, but will choose this spot if it’s available on our next trip! Nice and clean place, located very close to the boardwalk, casinos and restaurants. Good place to stay in Atlantic City. Centrally located apartment. Easy to find and check-in. Plenty of beds for five people. Kitchen amenities were a little sparse but manageable. Lots of restaurants nearby. The host was willing to help if we had any issues during our stay. Host was very helpful & the apartment was very nice. Summer apartment #6 walking distance from beach! A block from the Atlantic City Boardwalk, this high-rise condo hotel is a 4-minute walk from the casinos at Hard Rock Hotel and Ocean City Casino. Beautiful space - loved the pool and jacuzzi (the outdoor jacuzzi was open in November, which was great). Jeremy responds very quickly and is very nice. Casinos, coffee, and beach are all within 1 or 2 blocks. Would definitely book here again for another trip to Atlantic City! 
After a quick workout at the small gym, visit the new AQUA spa, relax by the seasonal outdoor pool & hot tub, or beach, or explore the casinos & premium outlets. This is a privately owned condo with one large room in an all-suites hotel/time-share property centrally located in the heart of Atlantic City's vibrant boardwalk community. I posted as many photos as possible to give you a sense of the size and views of our studio (approx. 440 sq ft). It's basically one big room (like a hotel room), and a separate bathroom. We ask that all guests abide by the rules of the property and maintain our little gem with the same care and cleanliness as you would your own home and leave it in the same condition as when you arrived. A set of clean linens and bath towels for two people as well as toilet paper and facial tissues are provided. All you need to bring are your beach towels and personal sundries (soap/shampoo/toothpaste, etc). The building provides wi-fi access via AT&T (for a fee). Suite has a 42" flat screen tv, an efficiency kitchen with a small fridge, microwave, coffee-maker, toaster, pots/pans, glassware, dishes and utensils. A small hair dryer is in the bathroom. A coin-operated laundromat is located on the property. There is 24-hour security and complimentary parking for one vehicle on a first-come, first-served basis. Card key access from the boardwalk at night. There is a two-night minimum for Friday or Saturday weekend stays. The condo is ideally located directly on the boardwalk across from the beach, so just bring your own beach towel and chair, and you are all set. There is a coffee-shop on the main floor of the property that serves breakfast and lunch, but there are many other dining options along the boardwalk from the usual pizza, burger, gyro sandwiches to Johnny Rockets and Rainforest Cafe to buffets and finer dining in most of the casinos and on the nearby Pier (e.g. Buddakhan). 
Along the boardwalk, you may rent bikes (which you can only ride in the morning), play mini-golf, take a pedi-cab, go shopping, visit the casinos, have a drink on the beach at one of the beach bars, or venture to the premium outlets across from Caesar's Palace. Ripley's Believe It Or Not is just a few doors down, and the Steel Pier with the video arcade just a few blocks down the boardwalk. There is a shopping mall across from Bally's with plenty of shopping and dining options and wonderful views of the beach and boardwalk if you go to the 3rd floor and walk to the end. The lobby of the building provides brochures for nearby eateries, tours, etc. and I have a binder with some additional suggestions. For those looking to spend time in AC but not stay in the casino, this is an ideal spot. Centrally located on the boardwalk, you have easy access to all it has to offer. Check-in was quick and easy, and the room was super clean when we checked in. The parking lot is very tight, so if you’re traveling with a big car it may be tough to get a good spot, better off parking up high. And if you’re looking to go to Borgata or Harrah's it’s a quick Uber ride and the pick-up spot is right in front of the lobby. Definitely recommend to those coming to AC who want to enjoy the boardwalk and other stuff without the hassle of the casino. Highly recommend! We had an amazing time at the unit and building. View was breathtaking, the building staff was very friendly, and the parking being included made it super stress free. The air conditioning got the place nice and cold and the view of the ocean from the bathroom is such a novelty touch. The unit was clean and exactly what I expected based on the photos shown here. We got there a few mins early and were able to check in without issues. The pool and hot tub were great and it was so nice to just go out and see the view from there on the 3rd floor. 
Having direct boardwalk access and being so close to everything made this worth every penny. I will definitely consider this space each time I stay in AC in the future, if it is available. My boyfriend and I were originally planning to do a one-day trip to Atlantic City. I was wondering if by chance it was possible to book an Airbnb for just a day so we could have a place to hide away from the sun, wash up, and relax. I first messaged Amelie to see if a day rate was available, but at the same time we decided to arrive the same day with a request for a later check-out. Amid last-minute changes and answering all my questions promptly, Amelie was sweet, understanding, flexible, and more than accommodating to my request. We were able to check in the same day and leave with an extended rate. I'd say I hit a bigger jackpot with my stay at Amelie's place than at the casinos. Now, the place. I couldn't believe the location and the EASE of getting to the place and the EASE of enjoying the location right on the boardwalk! Amelie's studio is just the right size for a couple, and the oceanfront view is breathtaking, especially from the 17th floor. The weather was also crummy, so Amelie gave some suggestions for things to do on a rainy day. That was very helpful, especially when you get lazy and don't want to research things. This last-minute getaway was perfect beyond words, and all credit goes to Amelie. The apartment had incredible views. It is located right at the boardwalk and is close to great restaurants and attractions. We really loved our stay. The location is great and it's a cute space... a little bigger than a hotel room. Great communication, and she had little tips. Very clear instructions. Overall a great time. Beachfront Studio Condo w/ Spectacular Views! This high-rise all-suite hotel, less than a block from the boardwalk and 1 mile from the Atlantic City Convention Center, is also within walking distance of casinos. Atlantic Palace Studio Suite ~ Atlantic City NJ! Huge all-glass windows! 
Amazing views! Modern decor. Partial kitchen. Directly on the boardwalk! Pier & mall an equal distance from the resort! At the beach & need a bathroom, lunch break or rest? No problem! Just cross the boardwalk to get inside! Walk the boardwalk at night! 414 sq ft studio condo, sleeps 4. One queen bed, one sofa bed. One full bath. One partial kitchen. No pets, no smoking including e-cigarettes. Call the front desk at least 1-2 weeks in advance for special floor or view requests! Currently requested is a high floor with ocean view! All rooms have spectacular all-glass window views! Love this resort! Beachfront condo! Directly on the boardwalk! Ripley's Believe It or Not is right next door, and so is Rita's! Short distance to the shopping outlets. Great place and super close to the beach! Ted & Stephanie were amazing hosts! Nice little place by the boardwalk. Ted and Stephanie were quick to respond to any issues. It is a great place to stay. The owners gave us quick responses whenever we had any questions or issues, and they were very friendly people. The place is very clean and nice. Ted & Stephanie were welcoming hosts, great at communicating check-in details, and cared about my group's experience. This house served our purposes well. 1 mile from the convention center, making our tradeshow there very accessible, and so close to the boardwalk, 1/2 block, nice! Extra family joined us and we had room for all. Nicely stocked kitchen. So much nicer to relax in our own homelike living room than in a hotel. We will probably come again next year! Ted & Stephanie were amazing hosts for us during our stay, offering local suggestions and overall great hospitality. We were located 100 feet from the boardwalk, which made getting to the beach a simple walk away. Overall a great stay, and we're happy we chose this location. 
Located on the Boardwalk in the center of Atlantic City's casino district, the Fantasea Resorts at Atlantic Palace features an outdoor pool with a sun terrace and direct access to Atlantic City Beach. Each of the Fantasea Resorts at Atlantic Palace suites is complete with kitchen facilities and cable TV. The suites overlook the ocean or the city, and have well-equipped private bathrooms. Studio suites are far more spacious and cozy than typical hotel accommodations. Generally, a Basic Studio features a queen bed and pull-out sofa and accommodates up to 4 people. The apartment is clean and good if you just need to crash. Bring your own dishes and silverware. The cabinets were all empty and the few dishes there were dirty. It's a 15-20 minute walk to AC attractions, and the neighborhood is not very safe, so we ended up taking cabs everywhere. Neighboring apartments and the common hallway/stairwell were extremely loud and made it near impossible to sleep. The host is very nice and responsive, but I feel the description misrepresents the location and convenience of the place. Excellent place; the attention was incredible, as was the delicious food in the restaurant. It exceeded my expectations; highly recommended. The apartment was perfect for the short stay my friends and I had in AC. It was just enough room for the 8 of us, and the hosts were super friendly. Everything was clean and we enjoyed our stay here. They also offer 10% off your meal in their restaurant, and by the way, the food is great! Would definitely stay here again. This is a beautiful, welcoming private apartment. You will be in the center of it all! Within walking distance to the Beach, Boardwalk, Tanger outlets, Caesars / Tropicana / Bally's casino, Convention center & the new Playground. The location is perfect! Spacious apartment, comfortably fits 4 people. Great location! In the center of Atlantic City! Angelica's place is only about a block away from Caesars and a few short blocks away from the Boardwalk, a really great location! 
She's also across the street from a liquor store and Pho if you're into that! Overall it's a great location, and the apartment looked exactly as pictured. The only things we noticed were a bit outdated or had room for improvement: there was only one roll of toilet paper for a 4-person stay over a full weekend, the kitchen sink and the bathtub both clogged when in use, and one of the couch's legs/supports wasn't screwed into place and was just propped up to try to hold up the couch. Outside of that, it's a wonderful location and we had a good stay. Thank you Angelica! Angelica's place is beautifully located right next to the boardwalk (2 blocks), restaurants, grocery stores (7-Eleven is just a 5-minute walk), and entertainment (close to Caesars and a few other casinos). It's good value for the money, but we unfortunately faced a few issues which might not be applicable for all. A) Unfortunately, the weekend we were there, the TV and Wi-Fi were both unavailable :-( ... my son was not very happy with this, as we had been out on a long trip and I had told him to hold off on watching his much-awaited series till we reached Atlantic City. B) Amenities in the kitchen: very few amenities were available in the kitchen, like 1 pan and a few plates, glasses and cutlery... traveling as a family with a kid, I was hoping to be able to prepare some food at home, but the limited availability of utensils made it impossible to do so. So overall, if you do not care about cooking in the kitchen, and if there is no issue with the Wi-Fi and TV, the place was excellent in terms of accessibility to the boardwalk and all the other attractions. This apt. was just as described and absolutely perfect in location! Literally like one block away from Caesars Palace, the outlets, and the boardwalk. Super clean apartment, exactly as described, and Angelica was perfectly helpful before I arrived. It easily fits four people, and there is actually an air mattress as well. 
You cannot beat value like this in Atlantic City, and I would definitely recommend this place and Angelica as a wonderful host! Great stay. Really close to Caesars. Great location. 5 to 10 minute walk to everything. Very clean as well. Bring extra towels and washcloths. Great breakfast spot nearby. Very weak Wi-Fi signal, but I barely used it so I didn't mind. This place was clean, new, and homey feeling; on top of that, it's in the middle of everything. I went with a group of 4, and it was very spacious and convenient. Definitely recommendable. Washer/dryer, AC, Internet, TV; close to Borgata, Harrah's and Golden Nugget Casinos, off White Horse Pike across from the windmills. There is a problem with the address: my house is on Emerson, not Atlantic Ave! Venice Park is a middle-class neighborhood with single-family homes, a mixture of African-Americans and Caucasians. There is one store at Riverside Drive and Morningside Ave. Bus 505 runs about every 40 minutes at the corner of Morningside and Emerson Ave. Ten minutes from NJ Transit trains and buses. Communication was great. This space definitely accommodates 10 people comfortably. The couches were plentiful and comfortable. The breakfast table was perfect for an early morning coffee. The backyard was a plus since the weather was nice. The only thing is that the location is a bit far away from the casinos and most of the Atlantic City attractions. Our group frequently used cabs to get to and from our destinations. However, this spot was a great value for the price. We loved staying at Joyce's house in Atlantic City. Plenty of space, clean as a whistle and comfortable, and the house is only an 8-minute drive to the boardwalk, casinos and beach. We will definitely return and recommend this listing to our friends. Loved her place! She was a delight. Had a great trip! The house was clean and stocked and accommodated all of us comfortably. Joyce was an amazing host! She was there right when we got to the house and greeted us with food and wine to start our weekend! 
The neighborhood was quiet and had plenty of parking space, with less than a 10-minute drive to the beach. I will definitely go to her again if I'm in the Atlantic City area! Joyce was available, flexible, and responsive. Her home and decor were lovely. Our group of 9 filled the space nicely and everyone felt very comfortable, although there was one bed, 2 pullouts, and 2 air mattresses, not actual beds. Overall we really got to relax in the space. Joyce was great! She greeted us on time and gave us a quick rundown on how to get around and what to check out in the city. The house was great for a large group of friends (9) and had everything we needed. A nice cozy house with plenty of room. The bedrooms were well equipped and had an old rustic feel to them. The backyard provides for a nice experience in the summer. Joyce was amazing in her communication and in helping us out with any questions or issues we had. There is a nice office space/guest room in the back with a huge sofa, which was great for all of us to sit around and talk. I would definitely stay at Joyce's the next time I am in Atlantic City. You are the 4th property from the beach! Walk or ride the BOARDWALK everywhere! 2 blocks to Tropicana! 500 feet to Bungalow Beach!! 2 blocks to Boardwalk Hall, WALK to the concerts! 4 blocks to Caesars & The Playground (Pier)! Come enjoy the summer in AC! Enjoy a 55" TV with HD cable and APPLE TV in the living room and a 40" TV with HD cable in the master bedroom! The whole home is equipped with WIFI. Parking in front of the house on the street & a parking lot across the street too! The house is very near the boardwalk and beach. Lots of places to eat. Plenty of room in the house for a group. A great place for family and kids. The host is very nice and responds almost immediately to questions and requests. We had a great time. Sean's place was great for our purposes! It's an ideal location, a short walking distance to the Trop, Boardwalk Hall, Caesars, Bally's and the Pier. 
It was clean & a great place to crash after going to events and nightlife. Sean was easy to communicate with. We would definitely stay again! Thanks Sean! This place was perfect. The house was clean and comfortable, and the location truly cannot be beat! Hope to stay here again next year! Sean was a great host. He answered every question in a timely manner. The location is very convenient to everything, from the boardwalk to the beach and nightlife. I would def recommend this place for anyone looking to come down for a vacation getaway. The place is awesome in terms of location and cleanliness. All the pictures shown in the description are exactly how the place looks. There is a lot of sitting area for your group to hang out. The location was right next to the boardwalk with many food options, in between two casinos, making it within walking distance of restaurants, gambling and partying. The parking folks across the street were very accommodating for a group of cars needing to park over the Labor Day weekend. Sean as a host was very responsive to all messages and questions. Great place, and I totally would stay here again. Great location, probably 100' from the beach. Parking across the street for $10 was very convenient. Excellent location. Extra close to the pier. The property is in a great location, incredibly close to the boardwalk. The photos are very accurate and the property was good value. Our host was extremely accommodating and responsive if we ever needed anything. All in all we enjoyed our stay here! This beautiful 4 bedroom, 2 bath apartment is spacious and comfortable. Located in downtown Atlantic City, there are plenty of attractions and activities to choose from. We are located just a couple blocks from the beach, shopping outlets, and casinos. In this exquisite setting you literally have everything Atlantic City has to offer at your doorstep. Great guy! Super nice and really friendly. He went out of his way to make my friends and me feel at home. 
Whenever I'm back in AC I will return to this place. Great location, great place, but no elevator and lots of stairs. A block and a half from the Boardwalk. The pin on the map does not show it properly, but I could not correct it. The space is amazing. Spacious and luxurious!!! You can cook, rest, entertain. The room was clean and spacious. If you are eyeing that ocean view in the listing picture, you must request an ocean-view room. This is a listing for all rooms at the Atlantic Palace. Also, guests who are not staying a full week do not receive housekeeping service, but this is manageable, as you can request towels, bedsheets, etc. from the front desk. A grocery store and 7-11 are down the block, and the boardwalk is right outside. Overall, a good stay. GREAT LOCATION! Newly renovated 2 bedroom apartment (living room, full bath) with a private entrance. Private balcony with partial ocean view. All points of interest: boardwalk, night clubs, 24/7 casinos. The master bedroom has a queen size bed (sleeps 2), the 2nd bedroom has a full size double bed (sleeps 4), and the living room has a sofa bed (queen) for two. Adjacent to the Tropicana. All points of interest (boardwalk, night clubs, 24/7 casinos, restaurants and shopping) are within a 10 min walking distance. Great location close to the casinos and NJ Transit. The host was polite, friendly, and responsive. The apartment was clean with modern appliances and also adequately stocked with pots, pans, towels, etc. 5/5, would recommend! This place was exactly what we were looking for. A great spot close to everything for my buddy's bachelor party. It accommodated the max occupancy comfortably. Easy walk to the Boardwalk and Caesars. Responsive landlord. A group of eight of us stayed and the place was worth it! Everything (casinos, clubs, boardwalk and restaurants) was very conveniently close. The host was great and very easy to communicate with. I will say that the neighborhood, and the first walk up to the apt., was a bit sketchy, but it was totally worth it. 
The place was very clean and welcoming. Would definitely stay here again! The apartment was very nice and clean. It was easily accessible and close to many of the attractions. My friends and I had an amazing experience and would definitely stay here again! The experience for our group was very good. The location is close to everything, 1 block from the boardwalk and casinos. I know some have said the area is sketchy; no one bugged us and we had a good time. Know your location: if you're staying in AC and you're not sleeping in a casino, you're likely to see some shady characters. Vladimir's apartments were up to date and very functional. He was a great communicator. The price was excellent, and as a bonus the weather was phenomenal. Will stay here again. This spot is a great location; we had 8 people in there, so it was a little tight, but it was still no problem. The area is pretty bad, like most of the city, but I would stay here again. Clean place very close to the beach and casinos - a little bit of a rough neighborhood, but as long as you stay in a group you will not have a problem. Communication was great and everything is as shown in the pictures/description - overall an enjoyable stay. The place is really convenient since it has three rooms and three really comfortable beds. The overall apartment is very spacious and clean. We had a great time and it was a great location, near all the main attractions. We'd stay again! The Atlantic Palace is located between Bally's and Trump Taj Mahal on the Boardwalk! The 2BR unit features a Queen size bed in each BR, a pull-out sofa, and 2 full bathrooms. 2BR Atlantic Palace on Boardwalk! Right on the boardwalk in Atlantic City is where you want to be! This 1 bedroom, 1 bathroom suite with a kitchen is right in the heart of the boardwalk! Your comfortable suite has a private bedroom/TV, a living room/TV and DVD player, and a cozy kitchen with a full size refrigerator. Enjoy lounging on the sundeck overlooking the boardwalk and Atlantic Ocean. 
Take a swim in the outdoor pool, then relax in the indoor hot tub. Head to the steam room or sauna. The kids can enjoy the game room while you get your workout in the fitness room. All the casinos are to the left or right when you leave out of the boardwalk entrance of the building. Covered parking for one car is also included in your stay. Regina was responsive and helpful. We enjoyed our stay. Regina was very accommodating and kept us in the loop on all of the details of our stay. The staff at the location was also very helpful. I would definitely rent from Regina again. A.C. BOARDWALK 1BDRM SUITE AT BEACH! Bring your own sleeping gear, use the sofa, or otherwise squeeze in. New carpeting. This is not a private space, but a common area of the house on the 2nd floor. It features a bunk bed, a full-size sofa bed, and room for an air mattress. Crash at my house upstairs with me and my other housemates while attending the convention or out on the town in AC and the casinos. I have other roommates who live in the house with me year-round; they are quiet, clean, and keep to themselves for the most part. It is just a few blocks' walk to get to the convention center and the Tanger outlet shopping center. The bus terminal and train station are all within a short walk of the house. If traveling at night, you may want to take an Uber or taxi to get around, which is readily available and affordable. The neighbourhood is best described as an urban setting. It was located close to the convention center, but the house was not fully ready yet. Make sure you're driving or taking an Uber around, because the area might not be the safest. Pretty central location in Atlantic City! With direct access to the beach, this Atlantic City condo building is within a 10-minute walk of Ripley's Believe It or Not Odditorium and the Steel Pier. Central Pier Arcade and Speedway and the Monopoly Monument are also within 5 minutes. Easy access to all casinos. Mandatory Fees: The security deposit is 100.00 U.S. dollars. 
Cash or credit is accepted. Due at check-in. Housekeeping Fees: There may be a fee of 42.00 U.S. dollars for Studio units, per stay. Cash or credit is accepted. There may be a fee of 52.00 U.S. dollars for 1 Bedroom units, per stay. Cash or credit is accepted. There may be a fee of 62.00 U.S. dollars for 2 Bedroom units, per stay. Cash or credit is accepted. There may be a fee of 72.00 U.S. dollars for 2 Bedroom units, per stay. Cash or credit is accepted. Resort Fees: The parking fee is 6.42 U.S. dollars. Cash or credit is accepted. One car per unit, per night. Additional and oversized vehicles incur added fees. Policy Restrictions: No pets. Please contact the resort directly regarding its ADA/general service animal policy. No smoking in units: smoking could result in forfeiture of the unit and/or other penalties, per the resort's non-smoking policy. Limited parking onsite. General Urgent Information: Amenities and area attractions are seasonal. A credit card imprint is required upon check-in at the resort. A security deposit is required upon check-in at the resort. The resort cannot honor unit upgrades or moves. Additional Information: Bicycles are not permitted in the units. There is a bike cage in the garage where bikes can be locked up.
But the Quran also confirms that God saved Jesus. This is one further reason for Muslims to revere Jesus: he had such high standing before God that The Almighty did NOT allow his enemies to kill him. For now, I'll leave the verses in the Quran out of the discussion (as I do not believe it to be inspired). 1) That the resurrection story in the Gospel of Mark is unreliable, and 2) that the Quran's teaching that Jesus was saved from the cross is consistent with the Bible. 1) While it appears that some parts of the Gospel of Mark were added later, it does not follow that the story therein is unreliable. We have three other Gospels, excluding Mark, that tell the same story – not to mention corroborating historical evidence. a. The 11 remaining apostles were all willing to suffer persecution for their strong conviction that Jesus rose from the dead. After the crucifixion, we read in the Gospels that the disciples hid for fear of the Jews. They thought that their Messiah had been slaughtered – worse than that, crucified! He suffered the worst, most shameful and despicable punishment around. Something must have happened to strongly convince these young nobodies to take such a stand in the face of such persecution. Peter was crucified upside down, James had his head bashed in, John was boiled in oil and exiled to the island of Patmos. And for what? For their belief that Jesus rose. Now, if they didn't actually believe that Jesus appeared to them in person (as the Gospels record), where did this belief come from? b. The apostle Paul persecuted early Christians. He hunted them down and threw them in prison. He was a Pharisee, a zealous Jew with much authority. But one day, while he was on his way to Damascus, he suddenly had a change of heart and became one of the Faith's most zealous and successful evangelists. Where did this change come from? 2) The Quran teaches that Jesus was saved from the cross. 
In your post, you very nicely lay out the argument that this is consistent with the prophets, and that it therefore makes no sense that he would have died. I don't think this argument works. Here's why: as a Christian, I believe that Jesus was saved from DEATH itself. In Genesis, we learn that death is the result of sin. Jesus conquered death because of His sinless life and God's power upon Him. He died to free us from the grips of sin. The idea of the Christian gospel is this: Jesus came and died for sin, to free us from it, to suffer God's wrath on the cross for us. The Bible says that no liar, sexually immoral, or unbelieving person will enter Heaven; their place is in the lake of fire. But, when we put our faith in Jesus as our Savior, God transfers our sin to Him and His righteousness to us. When we die, God greets us into Heaven as though we were as righteous as Jesus. It's a free gift that can only be accepted by grace alone through faith in Jesus Christ alone. I am open to being convinced otherwise – but for now, until I see evidence to the contrary, I must hold to what I believe is true. I'm interested in hearing your response to these things, my friend. Yours too is an interesting contribution, and I promise to write a considered response to discuss the many points you raised. All I want to say for now is that this platform has given me a lot of pleasure, mainly because it has put me in contact with fine people like yourself and Andrew – people with whom I feel I can have a meaningful, grown-up, robust and honest discussion. I would add a couple of things… The accounts regarding what happened after the crucifixion are different, but not conflicting. They present the story from different perspectives. Picture it as trying to sort through a case in court with 3 different witnesses who arrived at the scene of the crime at different times and from different angles. Everyone sees something different, emphasizes something different, but all can still do so truthfully. 
Each perspective is like a camera adding to our 3-D picture, until we see the complete scene. The disciples knew where Jesus' body was buried. A prominent man, Joseph of Arimathea, placed the body in his own tomb. There were Roman guards at the tomb. The guards went back to the Jewish leaders after Jesus rose and were instructed to lie about what happened, saying the disciples stole the body. These men would likely have been killed for telling the truth. All this makes it very unlikely that the disciples did not know where Christ was buried. The presence of a narration in several Gospels is not corroboration if you consider the strong possibility that some Gospels used others as sources. This is the prevailing view among biblical scholars, in what they call the synoptic problem. They cannot agree on who copied from whom, but the consensus is that the Gospels are not independent sources. And yes Andrew, there are some conflicts in the accounts of the NT regarding the resurrection. For example, all the Gospels have Jesus ascending into the sky after 3 days, with no one witnessing the ascension; Acts 1, on the other hand, tells of an ascension witnessed by disciples and taking place 40 days later. This can hardly be described as the same account from different angles. My post suggests that early Christians had good reasons to believe Jesus was crucified; I arrived at this idea by looking at the Quran's narration of what really happened. The Gospel of John is excluded from the Synoptics, and therefore it stands as corroborating evidence. We also have good reason to believe that Luke's gospel is reliable. He carefully investigated everything from the beginning (Luke 1:1-4), and scholars and historians have confirmed the archaeological sites, the locations of cities, the reigning rulers, etc. that are revealed in his writings (the Gospel of Luke and the book of Acts). It is my conclusion, therefore, that we really have no good reason not to believe him. 
With that said, you did not address the argument I made from the apostles' willingness to suffer and die for what they believed, or from Paul's conversion (and he did not witness the crucifixion). How can you explain this? Also, none of the Gospels (that I'm aware of) state that the ascension occurred 3 days after Jesus' resurrection. They don't mention the length of time from the resurrection to the ascension. Could you please cite the verses you're speaking of that give you this idea? If I use a history book as a source to write a work of historical nonfiction, does this use make my work false? Not at all! Only if the source is unreliable is my work based upon it necessarily untrue. So our concern here should be finding the first Gospel written (and we don't know for sure which it is) and then finding it to be either true or false. Jesus ascended into the sky 40 days after the resurrection with the disciples and several other followers present. This is consistent throughout the New Testament. I was referring to the story of the day of Christ's resurrection when I said it was the same story from different angles. Each account has different people arriving at the tomb and their reactions. If we set aside the Qu'ran's story of the crucifixion and resurrection to dissect the Christian accounts, we must make several highly unlikely quantum leaps to arrive at the Qu'ran's account. Thus far, your only reason not to believe the Gospels relies heavily on the account of the Qu'ran. This account comes 600 years after the fact, which makes any truth in it highly unlikely unless it is indeed divinely revealed. I personally find it much easier to believe a story written within 60 years of the event than one written 600 years after, especially given the fact that Muhammad was probably not even literate, much less a well-researched scholar on the subject. And the hardest proof to discount is the lives and deaths of the followers of Christ who saw him dead and resurrected! 
You are right about the history book analogy, but my view is that, regardless of which Gospel was written first, books that are dependent on each other cannot corroborate one another. I do not think you would have believed the Ascension and resurrection stories if you did not believe the Gospels were divinely revealed. If any other book carried a similar story about a resurrection and an ascension that took place two days ago, my guess is you would not believe it. So it does not really matter that the Quran was revealed 600 years after the event. Prophet Muhammad, peace and blessings upon him, never claimed to have researched history. I believe the Quran was revealed to him, and that does not require any academic qualification. Indeed, it was a demonstration of the divinity of the Quran that a man who cannot read or write can speak with such authority on matters that scholars to this day find difficult to approach. The fact that somebody can suffer or even die for a cause cannot be taken as proof of the validity of the cause itself. Think about kamikaze pilots and suicide bombers: their actions are not proof of the validity of their beliefs. The Disciples of Jesus were righteous people, and Allah praised them in the Quran. They were humans, and not infallible. The Gospels say that they denied Jesus during his stay with them, so whilst they were great people, they were not gods or even prophets of God. They were believers, humans, and students of Jesus. The author of the book of Acts was not an eyewitness to what he wrote. He narrated what he heard from sources that he did not name. So while I do not accuse him of inventing the stories, I cannot exclude the possibility that he heard them from unreliable sources. In the absence of information on who exactly transmitted those stories, we have to treat them with a certain degree of skepticism. I hold the same views on the teaching of St. Paul. He did not see or meet Jesus during his stay on earth. 
His authority was derived from alleged visions that cannot be corroborated. He said many things that were contrary to Jesus' own words. Again, his devotion is not sufficient proof of the truth of his teachings. The paper you link to makes one assumption that causes me to question its merits. It assumes that Luke 24 describes an ascension on Easter Day. Luke 24 does not give a timeline. Luke writes of Jesus appearing to the disciples and the men on the road to Emmaus, but never establishes when these events took place. Luke later gives a timeline in Acts that serves only to clarify earlier descriptions and does not come into conflict with these earlier writings. I realize you believe that the Qu'ran was divinely revealed. You must, to be a Muslim; it would be irrational for you to hold to the teachings of Islam and believe otherwise. I was simply saying that the Gospels and the book of Acts are, by historical qualifications, far more likely to be true if we set aside divine inspiration or revelation. While we cannot fully academically qualify a religious text, it is now divine inspiration of the Bible vs. divine revelation of the Qu'ran. The proof of claims of divine inspiration/revelation lies in their accuracy in relating historical events. If the Qu'ran does not accurately and truthfully describe the life, death, and resurrection of Jesus, we cannot assume it to be divinely revealed. This would be akin to calling God a liar. The same holds true of the Bible. As for the sacrifices of the Apostles, these cannot be accurately compared to kamikaze pilots. Kamikaze pilots were dying for a cause, not for something they knew to be true. Many of the Apostles were witnesses to what they testified about. These men saw Christ crucified, risen, and ascended. We have very reliable evidence suggesting that the Disciples and Apostles did not dispute any of the evidence revealed in the Gospels. John, a disciple, wrote the Gospel of John. 
Peter was a source for another of the Gospels. Mark was likely a young eyewitness to some of the events he recounted in his Gospel. It is widely believed that when Mark writes in chapter 14:51-52, “A young man, wearing nothing but a linen garment, was following Jesus. When they seized him, he fled naked, leaving his garment behind,” he was referring to himself in the third person. The teachings of Paul, I must continually state, are not in disagreement with the teachings of Jesus. His authority is obviously somewhat different from that of the Disciples, but he never disagrees with what Christ teaches. The purpose of Paul’s letters was to remind the churches of what they had been taught. His teachings were used to clarify how to practically apply the teachings of Christ within the context of the Church. At this time, the Church was a completely new concept and was in need of a framework. Paul, as a former Pharisee, was theologically qualified to understand and transmit the teachings of Jesus and lay out a blueprint of what the Church was to look like. Whereas many of the disciples were not highly educated, Paul understood the Jewish teachings and traditions well enough to see how they were all pointing towards the sacrifice of Christ. He understood the intricacies of the Gospel and confidently opposed false teachings within the Church such as Gnosticism. Gnostics were the authors of the Gnostic gospels such as the Infancy Gospel of Thomas, the Gospel of Judas, and many of the writings used in the recent book The Da Vinci Code. These were intellectual men who attempted to change the teachings of Christ into what suited their own purposes. They sought mysteries that could not be understood by the common man. Christ taught practical lessons in a practical, everyday language that would be understood by the people.
The New Testament was written in Koine Greek because it was the language of the commoners and was the most widely used and understood language in the world at that time (much the same way English is today). Hopefully, I have accurately and maybe even convincingly presented my answers. I have not been as concise as I would have liked to be, but there is much to address.

I agree with you that suffering and death do not stand as proof of a claim. However, unlike kamikaze pilots and suicide bombers, the disciples were in a position to know whether what they were saying was true. The disciples were the ones claiming to be eyewitnesses, and in turn suffering for this claim. While this does not prove that what they were saying was true, I think it does stand as proof that they at least BELIEVED that what they were saying was true. They believed that the actual risen Jesus actually appeared to them. How do you explain this? To offer a correction: while Luke was not an eyewitness in the Gospel account, he was an eyewitness to the accounts of the book of Acts. Scholars believe that he traveled with Paul, as is evident from the fact that the book of Acts was written in two different languages, based on the locations visited. It is possible that Luke heard his stories from unreliable sources, so I agree that a certain degree of skepticism is required. This is why we should examine his works carefully and see if he was telling the truth about other things. As I said, geographical, historical, and archaeological finds have confirmed this for us already. All that is left in question is the miracles and the claims to the divinity of Jesus. To answer this riddle, one must look to see if there are any other accounts. While the apostle John is not a historian, his gospel is not included in the synoptics and does qualify as an independent source.
Something else that is interesting to look at is that there were no claims to the contrary of the gospels at the time of their authorship (between 50 AD and 90 AD). During this time period, there were many people still alive who had lived to see these events. If the gospel writers did not record them accurately, or lied, why do we not have any of these corrections recorded in public discourse? First, I agree that Paul’s testimony was derived from alleged visions, and this cannot be verified. However, you must account for the change in Paul. What made Paul change? Secondly, what did Paul say that was contrary to Jesus’ own words? From my previous post, I noticed that you have not answered one of my questions. I’ll just copy and paste it below, to save me the trouble of typing it again: Also, none of the Gospels (that I’m aware of) state that the ascension occurred 3 days after Jesus’ resurrection. They don’t mention the length of time from the resurrection to the ascension. Could you please cite the verses you’re speaking of that give you this idea? Rasheed, I must say that I’m encouraged that we are having this conversation. It is good to talk about these things! Oh yeah! One more thing! What about Muhammad? His claim is that he saw a vision, and it cannot be corroborated. Why believe Muhammad over Paul?

There are too many points to comment on, so I will start by discussing an issue that you both raised: where did I get my supposed date for the ascension? Read the account in Luke 24 (and I will give two different translations), which indicates an ascension on the third day. 24.36 As they were saying this, Jesus himself stood among them. 24.50 Then he led them out as far as Bethany, and lifting up his hands he blessed them. 24.51 While he blessed them, he parted from them, and was carried up into heaven.
Not only I, but also the author of the paper mentioned in my previous comment, understood the narration to mean that all these events took place on the first day of the week. I do not think it is reasonable to say that a paper is without merit simply because the author made an assumption that you did not agree with. This assumption has been made by other intelligent and knowledgeable scholars, and they are not even Muslims. They are people who had access to many writings and manuscripts and are specialists in the Bible and its history. Happy Eid and Merry Christmas to you as well!

My attention was caught by the word “still” in this passage. What if what this passage is really saying is not that it was while they were speaking of it on the same day, but while they were still speaking of it at a later date? What if 40 days later they were still discussing it? I agree that this is not clear language, but I don’t think we can automatically assume that because something is not clear to us it is necessarily incorrect. I think we need to search first for ways to harmonize these passages before we say they are in conflict.

My original post was an attempt to say that it was possible for everyone present to be wrong, if God decided to save his prophet. Remember that the Gospels say that Jesus appeared to his disciples in a way that they were unable to recognise him; he spoke to them at length without them realising that it was Jesus in their company! I hope you can see that the word ‘still‘ most probably means on the same day and not 40 days later, particularly when you take into account the other new translations I mentioned. Now, you say that John the Apostle wrote a similar account of the crucifixion, and you added that Mark was believed to be an eyewitness. Are you certain that the Gospel of John was written by the disciple John? Many notable biblical scholars disagree with this assumption.
How did you arrive at the notion that Mark was referring to himself as the young man? Is it possible that the disciples did not contradict the Gospel accounts because they never saw those accounts? After all, there is a view amongst many biblical scholars that the Gospels were written after the death of the disciples, or that maybe additions were put there later, as is the case with some of the examples we discussed. Do you accept that there is uncertainty about the dates, authorship, and text of the Gospels? That is, unless you subscribe to the evangelical view of constant inspiration. Also, did the authors of the Gospels ever claim that their text was inspired?

Thank you for your reply. I apologize that it took me so long to respond. I hope you had a happy holiday. The question is, why were they unable to recognise him? Was it because the person walking beside them was not actually Jesus? Or was it because of some other reason? In my view, God kept them from seeing Jesus so that the disciples might be tested. From what I understand, nearly all the evidence suggests that the apostle John is the author of this gospel, as well as 1, 2, and 3 John, and Revelation. The author is “the disciple whom Jesus loved” (John 13:23; 19:26; 20:2; 21:7,20,24). He is not mentioned by name in the Gospel, which would be natural according to cultural custom if he wrote it, but hard to explain otherwise. It is also well attested in the early Church as early as 140 AD (about 50 years from the authorship of the Gospel, depending on the date given). I’m glad you’ve done some research on this yourself. What scholars do you speak of? What is their evidence? So, here’s how I see it: what does the textual evidence say? Perhaps there were errors and interpolations, but not in EVERY copy. The errors and interpolations would have been isolated to a few copies, by the interpolators. The idea is to bring the text together in light of the agreement of the majority of ancient manuscripts.
When all copies are viewed side by side, it is easy to see which have errors and which have additions that were not in the others. We can discuss this much further if you like. What are your thoughts? There’s not much reason to believe that the Gospels were not written by the apostles themselves (or at the right hand of the apostles). If you think I’m mistaken (and it’s always a possibility), please point me in the direction of some of the evidence that is drawing you to this conclusion. Well, this depends on what you mean by “uncertainty.” Am I absolutely 100% certain that the apostle John wrote the Gospel of John? Nope. But I have good enough evidence to believe it beyond a reasonable doubt, which is the best I think we can hope for in any given situation. So, yes, I accept that there is a possibility that our dating, or claim to authorship, or whatever, could be wrong. But I don’t think it is. I’m open to being convinced otherwise, but I’ll need some evidence. Until then, I must follow my conscience. I’m glad we’re having this dialogue. I think we’re going to learn a lot from each other. Your claim is that it was made to look like Jesus was crucified. I want to learn exactly what you mean by this. Are you saying that He was actually crucified and pulled down before dead, or that someone died in His place? What did they do with the body? And how did all of these people get so mistaken? I thought it was interesting that you said of Paul that his authority “was derived from alleged visions that cannot be corroborated.” What about Muhammad? His claim is that he saw a vision, and it cannot be corroborated. Why believe Muhammad over Paul?

Thank you for the comment. I did have a good holiday, and I hope you too had a good holiday. I can only tell you what Allah has told us in the Quran: that he was NOT crucified, so it was not a case of Jesus being pulled down before dead.
There are various theories (“extrapolations”); for example, some scholars suggested that the person who betrayed Christ’s whereabouts was made to look like him, and it was that person who was crucified; others said it was one of the soldiers who came to take him. I cannot say which of these scenarios actually happened; it could be something altogether different, because this was God saving his prophet, and there are no limits to the power of the Almighty. This, I think, answers your question about how all the people got mistaken. If it was possible for the disciples, who were very close to Christ, to be kept from recognising him, then of course it is also possible for a lot of others to be mistaken about his identity. As for the second question about why believe Muhammad over Paul, I am about to finish a post on the subject and would love to have your feedback on it.

It seems you’re starting with the presupposition that the Quran is true, in spite of what the other evidence tells us. So, my next question is, why should I believe the Quran’s account of what happened?

Of course the whole post was about explaining what the Quran told us about the crucifixion. As to why you should believe the Quran, I will have to ask whether you have read the Quran or not. If you have not, then I think it is unreasonable to pass judgment on its truthfulness before having read the book yourself. If you did read the Quran and you are not convinced, then of course that is your decision. The Quran speaks for itself, and I confess that I cannot be more persuasive than the Quran itself. If you want to discuss the Quran, after reading it yourself, then I will be more than happy to engage in the discussion.

Deal. It will be a while, but I’ve been meaning to do an in-depth study of Islam and the Quran for a while. I have a big problem with having to have faith in spite of what the evidence tells us (in a certain regard). My reasoning here is that God is true; therefore the evidence should reflect this.
If God says that He came as fire on top of Mount Sinai, then there should be a mountain somewhere in the Middle Eastern desert that is burned black on top. If He says that Jesus was crucified, history should attest to this. One more question: have you read the Bible?

I have read the Torah (5 books), some of the Prophets, the 4 Gospels, Acts, and only a few epistles. I have read many works which are considered Apocrypha, and I am continuously reading what becomes available in English from the Dead Sea Scrolls. I have read the Gospels and parts of the Hebrew Bible in both English and Arabic.

Yes, God could close the eyes of the seers if He wanted to. However, we have no reason to believe that it wasn’t Jesus who was crucified.

I will again refer you to my original post: I said in that post that Christians CANNOT be blamed for believing that Jesus was crucified, until the revelation in the Quran of what really happened. I add here that, now that the Quran has been revealed and God has sent his messenger Muhammad pbuh, He has told us in it that Christians (and others) who disbelieve the messenger and deny the Quran will be accountable in front of God, not for believing the crucifixion took place, but for shunning the message of God, and for insisting that God is triune or has partners. Finally, I am rather pleased that you ask many questions. This blog is a magnificent learning aid, as it allows me to have direct contact with some knowledgeable believers from other faiths, people who take their faith seriously and are genuinely looking to please God, while being respectful and reasonable towards other opinions.

I had a glancing look at the topics and discussions on your blog. I must say it is interesting and highly provocative! Certainly this debate stimulates the striving after further HARD QUESTIONS! It is enlightening, though, and pushes people towards a BETTER reading of what we keep in our bedrooms and cars: the Holy Book! May Allah reward you for your good deed.
I think the Qu’ran has a very significant flaw in its reasoning on this subject. Jesus himself declared that he had been dead and resurrected. When Thomas, his disciple, doubted the accounts of those who had seen the risen Christ, Jesus appeared to him and told Thomas to put his hands in the nail holes and in his side where he had been pierced by a spear. If the Qu’ran is correct and Jesus did not in fact die, then he clearly deceived his disciples. This calls into question his character and nature, and the very nature of the God of whom you believe him to be a prophet. I’m not sure about Allah, but the God of the Christians and the Jews is not a deceiver, nor is His only begotten son Jesus Christ. This is a logical flaw too significant to ignore.

Andrew’s logic takes us nowhere! He is using what the Bible says to make his judgment on what the Quran says. Well, how about this: when Muslims do the same thing, what will be your response? I say that Rasheed stated a very valid point in his initial post when he said that the two crucifixion accounts (that of the Bible and that of the Quran) cannot be reconciled, and reconciliation should not be the objective… the goal should be simply to understand the other side. Christians don’t accept the whole Quran (not just its account of the crucifixion) because the Bible tells them so. Same thing on the opposite side: Muslims don’t accept the Bible because the Quran tells them so. So it is actually the two books themselves that reject one another. So what’s the point in saying that the Quran is wrong because what the Bible says is different, or that the Bible is wrong because what the Quran says is different? Just try to understand what the other side says and ask questions that will help you find out more details (about your faith and about the other faith).

I am not using what the Bible says specifically to make a judgment on the Qu’ran. I realize the two accounts cannot be reconciled.
There are some deep issues that the Qu’ranic account cannot explain. The point is that the Qu’ran’s account would have to mean that Jesus misled his disciples. They didn’t just assume he was resurrected; they saw his resurrected body! Thomas, a disciple, put his hands in the nail holes in Jesus’ hands! Jesus himself predicted his death many times beforehand, and several of his followers witnessed him die. He also told them afterwards that he had been resurrected. These were the same men that would be killed for testifying that Jesus was raised from the dead. Why would they testify even to the point of death if it were not true? The Qu’ran can give no answer to this question, because there isn’t one. We have little logical reason to believe the Qu’ranic account of an event that happened 600 years beforehand as historical. The Gospels were written with the intention of being historical accounts. They set forth claims based in historical events, as any biography of that time would. The Qu’ran sets forth attempted historical claims that fly in the face of any evidence we have. And the only reasoning we have for these claims is that Allah told Muhammad. As a Muslim, Rasheed has consistently questioned the reliability of the Apostle Paul because he “saw a vision.” Could we not, as Christians, question Muhammad the same way? The difference between the two is that Muhammad’s vision was never corroborated by other witnesses, whereas Paul’s vision caused the men with him to hide, and changed Paul from the most vicious persecutor of the Church into one of its most cherished Apostles. This cannot be explained away. Additionally, the New Testament tells us that 500 people saw the risen Christ. And I adamantly disagree with your comment that “Christians don’t accept the whole Qu’ran because the Bible tells them so.” The Bible never specifically tells us not to accept the Qu’ran, because the Bible predates the Qu’ran.
Christians do not accept the Qu’ran because it consists of the radical, contradictory, and unjustifiable claims of one man that were enforced by the sword. Christianity spread not because of the sword, but in spite of it. Christianity spread even as its adherents were tortured, killed, thrown to the lions, etc., because they were not dying for a belief, but for something they knew to be true. They saw the risen Savior and could not deny what they had seen with their own eyes. The underlying issue is that the Bible and the Qu’ran are mutually exclusive, and the logical evidence falls heavily on the side of the Bible: historically, textually, prophetically, in number of witnesses, in transmission, etc. The Qu’ran attempts to cherry-pick characters from the Bible as prophets, yet it disagrees with the majority of what they said and did and who they claimed to be. While I am engaging with Rasheed and attempting to understand what the Qu’ran says and why, I cannot sit idly by as the Qu’ran ignores and denies who Jesus is and what he came to do. The beauty of the Bible is not rhythm or language, but the underlying message: the message that God, in spite of our wickedness, humbled Himself and came down from heaven to live among us. He did this to conquer death and sin by dying and coming back to life, so that we would not have to pay the price for sin, which is eternal separation from Him. This is a story that began in Genesis (the 1st book of the Bible), and the New Testament tells us that Jesus completed this work and took his seat in heaven. There is no need for another prophet or book. God’s plan for the redemption of mankind has been fulfilled, and all that is left is for us to accept the gift of salvation that He freely gives.

Although you tried to deny it, the bulk of your comment above precisely proves my point that Christians reject the Quran because the Bible tells them so. I never said that Christians don’t accept the Quran because the Bible predates the Quran.
That was not part of my argument. My argument was that the information given in the Bible is what forms the basis upon which Christians base their critique of the Quran. This is what I meant when I said the Bible tells Christians not to believe in the Quran. Human logic does not operate without facts: when discussing the crucifixion, you as a Christian will operate your logic on information taken from the Bible and consider that information as given facts, and that becomes your criterion for investigating and judging the Quranic account. Please go back to your comment, read it over, and listen to yourself. Every piece of information you used to show why the Quran is not acceptable to Christians (whether in regard to the crucifixion or whatever else) is based on the Bible. Did you get this information from anywhere other than the Bible? In your second paragraph you say the Gospels were “written with the intention of being historical accounts.” Well, how much history can you get about an event when even the main witnesses themselves (the disciples) were not sure about what they were experiencing? But what’s most important is that you are not reading the Bible as just another historical account, but as the word of God, an inspiration! This makes it infallible from your point of view. Which means that if there are errors, they have to be in the other accounts, not in the Bible! As for the disciples’ persecution, how can you be sure the early Christians were being killed and persecuted because they were testifying that Jesus rose from the dead? Maybe they were being killed because they were saying that Jesus was not killed and that he was lifted up to heaven (which would agree with the Quran when it says Jesus was not killed nor even crucified, but Allah lifted him up unto Him).
In the same paragraph you say: “The Qu’ran sets forth attempted historical claims that fly in the face of any evidence we have.” Aren’t you again talking about the “historical accounts of the Gospels” as the evidence? Again, that’s referring to the Bible for evidence. What do we know about the accounts of the crucifixion in the deleted gospels? The currently existing Gospels were not the only ones in existence… what do we know of those gospels’ accounts of the crucifixion? As for your comment about Allah telling Mohamed what happened, well, for a Muslim, Allah is the true God, so naturally for a Muslim that is reason enough to believe the Quranic account; a Muslim does not have a problem here. You have a problem because you don’t believe in Allah or the Quran, and again that’s because the Bible is standing in the way. As for your comparison between Mohamed and Paul, I am sorry, but I don’t see the fact that there were men hiding as “corroborating” Paul’s experience. Clearly, when someone hides from the scene they miss the whole action and therefore cannot testify in corroboration of, or against, the account. As for the change which happened to Paul after that event, I prefer to look at the change which happened to Christianity and to the teachings of Jesus Christ after that event. On the other hand, the Quran was not a vision; it is a sequence of passages containing words which Mohamed received and gave to the people around him. These words, while intangible, are given to the people with the underlying claim that they are from the true God. So people can read the words, reason about them, and decide whether to accept them as the words of God or as fabrications of Mohamed. In other words, receiving a book from the true God does not require witnesses; the words of the book themselves are the test of Mohamed’s credibility. Mohamed’s history as one who never told a lie is also a testimony.
Now, as for the change which the Quran produced in Mohamed, it is clear: he became a great messenger, prophet, and leader. He lived for 63 years: 40 before the start of his mission and 23 after, with 13 years of his mission in Mecca and the last 10 in Medina. In Mecca, for 13 years, he never used his fist or any form of violence against anyone. With only the words of the Quran he transformed the lives of a great number of followers from all walks of life. The sword was eventually used against him the night before his flight to Medina, when the 40 tribes (Quraish and its allies) elected 40 young men (one from each tribe), gave them swords, and had them wait at the steps of Mohamed’s door for him to come out early around dawn, so that they could jump on him and kill him. The plan was devised as such so that Mohamed’s clan (which was still not Muslim at the time) would not be able to fight all the 40 tribes which participated in the assassination. Mohamed was saved by a revelation from Allah. The revelation told Mohamed about the plan and told him what to do. He was told by Allah’s angel to take sand in his hand and throw it at the men. He did, and they were all made to fall asleep. They later woke up at sunrise, but it was too late, because he had already reached far beyond the limits of Mecca with his companion (Abu-Bakr) and hidden in a cave. While they were in the cave, a pigeon and a spider covered the entrance over them when their enemies were searching close by. The plan of the idol-worshippers of Mecca was not just to put an end to Mohamed’s life, but to the whole Muslim community and their religion altogether. Clearly, Allah saved his messenger, and Mohamed made it safely to Medina. As a natural result of these events, Mecca and Medina entered a state of war. By the way, the above story is a historical account; it is not stated in the Quran.
Further on, you say: “Christians do not accept the Quran because it is the radical, contradictory, and unjustifiable claims of one man that were enforced by the sword.” I say, neither Mohamed nor any of his successors forced anybody by the sword to accept the Quran. In all the lands conquered by Muslim armies, people were left free to live and to decide about Islam, and they all accepted it by their own choice. Even today, while we speak, hundreds of Christians convert to Islam in the West every day. What type of sword forces these people to accept the Quran? And by the way, which holy book were the Crusaders and Queen Isabella of Spain following? The Quran or the Bible? Some of the Christians who were being persecuted by the Roman Empire, and some Jews who were also persecuted before them, landed close to Mecca and Medina; they had different accounts than today’s Christianity, and they even prophesied the coming of Mohamed. Clearly, Christians don’t know about this because it is not recorded in the “history” which they are aware of. You say: “The underlying issue is that the Bible and Qu’ran are mutually exclusive, and the logical evidence falls heavily on the side of the Bible. Historically, textually, prophetically, number of witnesses, transmission, etc.” It is amazing how sure you sound about this, because it is the typical viewpoint of Muslims towards the Quran: that it is superior historically, textually, prophetically, in number of witnesses, in transmission, as well as scientifically! I don’t expect us to be debating these points here, and if you are, I am sorry, I won’t be able to join you. But if you ever meet a Muslim and get into a discussion with him or her comparing the Quran and the Bible, both of you will be using logic to derive some evidence to support your point. But when logic needs facts, is each of you going to use his holy book as the reference? To me, this is the underlying issue.
I suggest that when Christians, Muslims, and Jews get into a religious discussion, each of them should forget about his background and consider one thing only: that they are all children of Abraham. Now assume you were actually Abraham’s sons, and that you had been living with Abraham in his time, but that you had gone to sleep one night, and when you woke up in the morning it was the year 2008 already. Now, how, as brothers and children of the same father Abraham, are you going to deal with the religious differences of the three religions, Judaism, Christianity, and Islam? You have to keep in mind all the time (at least while in the discussion) that you are brothers and that you are not followers of any of the three religions, only followers of Abraham’s religion. As for the case when one of you is discussing his religion with a person from outside these three religions, you can use your scriptures as you like.

The early Christians were killed for their testimony that Jesus died on the cross and rose from the dead. Are you suggesting that the Biblical accounts lie? That would be a strange thing to lie about, and what you are suggesting makes no sense. There are no deleted Gospels. There are “gospels” written over a hundred years after the fact by Gnostics attempting to change the story. These are not truthful or historical in any way, shape, or form. If you were to read any, you would find them laughable. There was even one, The Secret Gospel of Mark, which was forged in the 20th century. The Gnostic philosophers tried to change Christ to fit their existing teachings. The men with Paul at his conversion saw a bright light and hid. They did not see Christ, but they saw something to corroborate the fact that Paul had a vision. He was then blinded temporarily and healed by a Christian whom God told in a vision to go to him. Again, if you can’t show how Paul changed the teachings, don’t present it as evidence. It’s just not true.
You may “prefer” to believe this, but I could “prefer” to believe that the moon is made of cheese. I would be entirely wrong. The words of the Qu’ran alone do not prove anything. It doesn’t take much for a person to cut and paste religions together when surrounded by Christians, Jews, and Pagans, as Muhammad was. Even the name Allah was the name given to the rain god in the Kaaba at the time. A good analogy might be to hand someone a recipe and ingredients and ask them to cook dinner. Sure, they could cook it, but it wouldn’t be the same as having a chef make it. You bring up the Crusades, but one thing I would urge you to see is that while people may fight “in the name of Christianity,” they are not following Christ. The word “Christian” means Christ-like. If one is not following the teachings of Christ, it doesn’t matter what they claim to fight for. I am sorry the Crusades have tainted your view of Christianity, but they were nothing more than warlords using the name of Christ for their own ends. Unfortunately, the Church has a long history of being hijacked by the power-hungry. You speak of Jews and Christians in Arabia having different accounts. If you read up on Church history, practice, and teachings, there were many who brought false teachings into the Church. Please don’t assume that because their story was different, they were correct. These may have been “Christians” and Jews who lost touch with their teachings in their new culture, or they may even have been sent away from their own people for what they taught. The Biblical manuscripts we have are early and extremely well transmitted. We have copies of the Gospel of John from as early as 35 years after the original. The teachings haven’t changed, although people’s application may have changed. I don’t see how you can say the Qu’ran is superior historically. It was not written as a historical book. If the Qu’ran is superior prophetically, where are your prophecies?
I could give you several from the Bible off the top of my head in Daniel, Isaiah, Psalms, etc. If the Qur’an has a superior number of witnesses, why did no one witness Muhammad’s revelations from Allah? If it is superior in transmission, why was it not written down in the lifetime of Muhammad? Why did Muhammad die suddenly and unexpectedly? What scientific evidence do you present for the Qur’an? If you’re going to lay out these points, show me facts and passages. I would love to be able to reconcile our differences as children of Abraham. The only problem is, your religion says Abraham is someone completely different from the Abraham of Christianity and Judaism. Our Abraham “believed the LORD and it was credited to him as righteousness” (Genesis 15:6). He did not follow Five Pillars or attempt to work his way into heaven and pray that Allah would be merciful. He simply had faith in God’s promise to him. We don’t serve the same God. I wish we did, my friend. I wish we did. The early Christians were killed for different reasons, and the accounts of the Bible settled on a single point of view after lengthy disputes among various parties. In the very early days, just the belief that Jesus was the Messiah was enough to get you killed, and the persecutors would not wait to hear your opinion about his resurrection. I did not suggest that the Biblical accounts lie, but I do believe that the theological differences after Jesus’ ascension into heaven were vast, and the debates reflect how the scriptures evolved. The above point is connected to my point about gospels which were deleted. You cannot throw in something that was fabricated in the 20th century to prove that there were no gospels deleted. Nevertheless, the fact that there were Gnostics suggests that most likely there were other schools of theology which had other gospels and accounts. The fact that we have canonical gospels implies that there must have been other, non-canonical gospels.
And these could include not just the Gnostics and the few other sects you know about, but other sects as well which you don’t know about, such as those which ran away into the desert of Arabia, and who knows where else. I don’t see how those who ran away to the desert could have been influenced by the cultures to which they fled, especially those who ran to the Arabian Peninsula! Arabs in the peninsula were pagans and did not know anything about the coming of a new prophet. They had never had a prophet before, so they could not possibly have received a prophecy about a future Arab prophet. Therefore, runaway Christians and Jews could not possibly have picked that up from the Arabs. To the contrary, it would be the other way around: it would be those Jews and Christians who would tell the Arabs about a new prophet. Could we forget about Paul, please? Let’s talk about Abdo (that’s me) for a moment. Abdo says he saw a vision; there were men with Abdo who did not see the vision that Abdo saw. They only saw a bright light and hid. A Muslim man who saw a vision (which likewise was seen by no one else) came and cured Abdo of his blindness. We don’t know that Muslim man’s name or the names of the men who were with Abdo, but we believe Abdo to be telling the truth, despite the fact that Abdo is ready to lie sometimes (for the sake of Allah, so that the followers of Allah may abound; Romans 3:7)! After the vision, Abdo became a devout Muslim, but he saw some obstacles which prevented people from becoming Muslim. Although the majority of Muslim scholars insisted that no one can be a Muslim without observing the five pillars, Abdo taught that it is OK for new Muslims to drop one or two pillars if they find them daunting! And never mind about circumcision anymore, because our cousins (in Christianity) dropped it a long time ago too.
If you find the words of the Quran to be a cut-and-paste out of the Bible, I challenge you to give me one single sentence which is copied from the Bible (NT or OT). It seems to me that cut and paste was the business of the synoptic writers. It is the first time I have heard that the name Allah was the name given to the rain god in the Kaaba! If so, is that why Christian Arabs (and Jews) still refer to God by the word “Allah”? Have you seen an Arabic Bible before? Take a look at this website (http://www.waterlive.org/), type the word الله in the search field (of course you can copy and paste it) and click the button next to it. I just did that and got 1010 hits (one thousand and ten). Hits came from both the OT and the NT. What is the word “الله” doing in the Bible?! Do you also know what the word for God is in Aramaic? Did Abraham or Moses or Jesus use the word “God”? What name did they know for God? The Quran and its words are the ultimate test of Mohamed’s truthfulness. The claim is very simple: Mohamed claimed that he was a messenger from God and that he was given the Quran, which is the word of God. Muslims believe the above statement; their knowledge of Mohamed and of the Quran makes them accept this claim wholeheartedly, without any doubt, and that’s it for them. If you know anything that can show that Mohamed is not worthy of being believed, and that the Quran cannot be from God, you have to present it. I understand how difficult it can be for someone whose mother tongue is very remote from the source text. Don’t you just envy Muslims in general and Arabs in particular because they are in direct touch with the original text of their scriptures?! Can you imagine what it would be like for you if your mother tongue were Greek or Aramaic and you had access to the original scriptures of the Bible in these languages?
Well, the Quran is written in Arabic, and Arabic is a living language spoken by hundreds of millions of people in Arab countries, and taught as a second language in many Muslim countries. All of them testify to the power of the Quran, both for its language and for the truthfulness of its content. I mentioned the Crusaders and Queen Isabella to give examples of how Christianity was forced on some people by the sword, and I am sure that any of those who were forced to convert to Christianity by the sword and got a chance to escape reverted as soon as they felt safe from the oppressors. You still have not told me what type of sword forces Christians in the West to convert to Islam. Also, what about the Muslims who come to the West? There are millions of them. If they were Muslims only because they had been forced to be, why don’t they renounce Islam when they settle in the West? Among the millions of Muslims living in the West, what percentage of them renounces Islam? To the contrary, Islam in the West is in continuous expansion. You advise me: “please don’t assume that because their account is different that they are correct.” Well, I am not making that assumption, but it seems to me that you are the one who is sure that if anything is different from the Bible’s account, then it is wrong! When you say “we have copies of the Gospel of John from as early as 35 years after the original,” you are not telling us how long after Christ the originals were written, and you sound as if you are saying that 35 years are nothing. The Soviet Union was the guardian of communism in the early sixties, and by the late eighties (almost 35 years afterwards) it had miserably lost the Cold War and Ronald Reagan was commanding Gorbachev to tear down the Berlin Wall. That was a “political” defeat, I must agree, but it was underlined by a major ideological defeat.
And the lesson to be learned here is that when a religion or ideology is still fresh, it is most susceptible to alterations, especially when it is subjected to tremendous pressures and attacks. When it comes to Christianity, the susceptibility is magnified due to external and internal factors: the external factor being the tremendous Jewish-Roman enmity and pressure, and the internal factor being the great uncertainty and disagreement concerning different issues, especially the nature of Christ and the way his life on earth ended. The comparison between the Quran and the Bible has been extensively studied, discussed, and debated. I don’t think there is room for us here to reenact the debate, but the Web is full of material on the topic. Just Google it and you will be overwhelmed. You ask a number of questions about Mohamed and the transmission of the Quran. You ask why no one witnessed Mohamed’s revelations from Allah. I say, no one needed to witness the revelation, because the revelation is basically “words”. All we need is to listen to the words and make our own judgment on whether these words can be from God or are simply Mohamed’s own. That’s all. The people around Mohamed never asked to be involved in the revelation process (i.e., to witness it in one way or another). They were intelligent human beings like all humans, and they could use their minds to make their own judgments, especially since all that Mohamed presented them with were words from their own language. If I tell you now that this long comment you are reading (which I have been writing for over a month now!) was revealed to me from God, won’t you be able to tell that this is false? And it won’t be so hard for you to present strong arguments to show that. In the same way, the people around Mohamed could, and anyone today can, apply any number of tests to examine Mohamed’s claim that the words of the Quran were revealed to him from Allah, without having to be present during the revelation process.
You also ask: “If it is superior in transmission, why was it not written down in the lifetime of Mohamed?” My answer, again: it did not have to be written down, neither in the lifetime of Mohamed nor after. In fact, I personally believe that verbal transmission is superior to written transmission. The words of the Quran were inscribed in Mohamed’s mind, and when he recited them to the people around him the words were again inscribed in their minds, because they memorized them by heart and rehearsed them continuously. Mohamed’s mission lasted 23 years, and all his companions took the Quran from him verbally and memorized it during his life. Until this day, memorization remains the primary method of transmission of the Quran. If you read a passage in the Quran and make a mistake, and in your audience there is a Muslim who has memorized that passage, he will stop you right away and make sure you make the correction. Writing down the Quran started during Mohamed’s life, but it was only after he died that it was “compiled” into one book. The main reason for compiling the Quran was that Islam expanded to various regions where Arabic was not the mother tongue. It was therefore necessary to record the Quran in a standard official volume which would serve as the reference whenever people came into differences about it. This volume was compiled by a syndicate of the people who had received and memorized the Quran directly from Mohamed during his life. You ask: “Why did Mohamed die suddenly and unexpectedly?” Who says that Mohamed died “suddenly” and “unexpectedly”?! You ask: “What scientific evidence do you present for the Quran?” I say, there are passages in the Quran which talk about various natural phenomena. The knowledge presented in these passages could not possibly have been known to humans at that time, because only later could scientific observation assert such knowledge.
Therefore those passages with scientific knowledge could never have been written by Mohamed or any human, and therefore they must come from God Almighty. You can find various materials on the Web; just give it a little effort. Finally, your comment about Abraham really made me sad. I always thought that the three religions have almost nothing to disagree about when it comes to Abraham. But it is clear to me that your perception of Abraham has problems, and I invite you to re-read the Bible to correct your perception, and to find that there is a lot more agreement on him among the three religions than you think. The statement you cited about him from Genesis echoes the typical description of Abraham in the Quran. Even the five pillars: do you have any doubt that he 1) testified that there is only one God, or 2) used to pray to God, or 3) used to fast from food, or 4) used to pay charity to the poor? Furthermore, as for hajj (the fifth pillar) to the Kaaba, it is he who actually built the Kaaba and started with his son the ritual of annual pilgrimage to it. Do you have any doubt that he used to do these things and other acts of righteousness in order to gain enough credit with God so that he would deserve the mercy of God and get into heaven? Yes, he did have trust in God, and because of that trust he was ready to obey God’s order and kill his own son as commanded by God. His son, too, did not resist and was willing to abide by God’s command. His trust and obedience won him the love and mercy of God, and this is what we learn from Abraham and his son: that when we trust in God and are obedient to him, and persevere in acts of worship and righteousness, we will win his mercy and enter his paradise. The ram was a symbol of God’s mercy: that those who pass the test don’t have to worry anymore. The more Abraham’s faith and trust in God grew, the more God’s trials of Abraham grew, and the test of killing his own son was the climax.
There can be no greater test than that, and only the greatest among humans may pass such a test. Abraham passed it, and he deserved God’s mercy and the assurance that he would never have to grieve again. When Abraham asked God to give the same to his children, God said that only the righteous deserve My assurance. Fortunately, most people don’t have to go through the same tests as Abraham did, but I am sure no one will deserve God’s mercy without believing in God and working for His mercy. No one is immune from sin, nor immune from punishment for his sins, and no one shall bear the sins of another (“The son shall not bear the iniquity of the father, neither shall the father bear the iniquity of the son”). I don’t know where to even begin addressing your response. You have undoubtedly misunderstood Christianity, how it was formed, and how it has been abused. In its formative period, the doctrines of Christianity were not uncertain, but there was opposition. As men saw the success and miracles of Christianity, they began to adopt some of its teachings, adding their own twist to them. Paul, John, Peter: they all had to address false teachings in several churches. You must read Paul through a Christian understanding, or you will continue to wrongly think he changed the message. As to the “other gospels,” these were books written by the false teachers I described above. If someone had written a book that went against the Qur’an, would Muslims accept it? No! The same goes for these “other gospels,” which were not written until the 2nd century or later. The core doctrines of Christianity were formed immediately upon the Day of Pentecost, as the Holy Spirit descended upon Christ’s followers. We see this in Peter’s sermon in Acts 2: Christ crucified and raised to life for the forgiveness of sins. That was the Gospel. I understand your reference to the Crusades, but you must understand that the Crusades and those who supported them were not following the Bible.
They followed their greed and hatred, manipulating the Bible along the way to make it say what they wanted. I am sorry that they gave you an improper view of Christianity, but I hope you can understand that this is not what Christianity is. Christ called his followers not to a physical war; our battle is with the Devil and his minions, fighting for the souls of men. As for Abraham, the Bible does ascribe to him at least the first two pillars. I’m not sure that there is mention of him fasting or giving to the poor. The fifth pillar is not mentioned anywhere and was a legend of Arabia, not a biblical teaching. It was used to legitimize a connection from Muhammad to Abraham. If you focus on the story of Abraham sacrificing Isaac, as you recounted it, you will see that there is something much greater being alluded to here. This is the message of Christianity: that Christ was that ram, the perfect sacrifice that takes our place. It is only by God’s mercy that Christ took our place in death and lives, so that we too may have life in him as our savior. That is Christianity: that we can have assurance that the blood of Christ covers our sin; that we no longer have to rely on five pillars, or on our own righteousness, because in us God sees the righteousness of His Son. It is a salvation based on grace, on God’s terms, not on our own works. It is OK if you see that I have misunderstood Christianity, because I too see that you have misunderstood Islam. After all, we are here to discuss and explain our differences so that we decrease our misunderstandings and learn how to respect one another. In the paragraph where you talk about Christianity’s “formative period,” I understand that you are referring to the period after Jesus’ departure.
If that’s correct, then I must say that this sounds somewhat odd to a Muslim, because the formative period of a religion should normally start at the beginning of the mission of the prophet or messenger of that religion and finish by the time he departs. But in the case of Christianity I guess things are different, because the main doctrines of the Christian faith were formed by Paul, and that’s why Paul is typically referred to as the “father of Christianity.” With Islam it is different. Islam is not ascribed to any Muslim after Mohamed. Even Mohamed himself does not claim to be the originator of the religion. Islam is the religion of Allah. It is based on the same articles of faith which God taught to all His prophets and messengers. With the last messenger, Mohamed, God completed and sealed His religion, Islam, and His succession of prophets. After Mohamed’s death, the formation process finished, and no one could add or take away any item of the religion (whether doctrine or law). For Christianity, the case is clearly different. The formation process started and finished after Jesus’ departure. Now you are saying that in that formative period the doctrines of Christianity were not uncertain, but there was opposition. The interesting thing to me is that some of this opposition came from Jesus’ own disciples, who were there during Jesus’ time on earth, when Paul himself was not! Which makes you wonder who was actually opposing whom, and who was actually adding his own twist to the original doctrines. You can of course open the Bible and show us that what Paul says is in complete conformity with the Bible, but my point is that the Bible itself is the product of the formation process. That process involved different doctrines, explanations, opinions, etc., and eventually only one set won over the rest, and that is what came to be Christianity. Everything else was omitted once and for all and was declared heretical.
Can we use the NT as a frame of reference to examine the other sects when the NT itself is the product of one of those sects?! In Islam, the Quran is the reference because it was completed during Mohamed’s life. Is there anything comparable to that in Christianity? Is there any piece of scripture that was with Jesus during his life to which all Christians can refer and justify themselves according to it? If I read Paul and his writings, how is it possible for me to be certain that what he wrote conforms to what Jesus came with? Who and what can be the judge between Paul and his opposition (the “false teachers,” as you describe them), judge among the different sects, and most importantly judge between Christians altogether on one side and the Jews on the other? As for the Crusades, I am sorry to tell you that it is very hard to convince anyone in the world (not just Muslims) that the Crusades were not following the Bible. Can you refer me to any publication from the Church making such a statement? I don’t think you will find any Church from the time of the Crusades that could possibly have opposed them. So any opposition you present will naturally come from a modern Church. But then, this makes me ask whether it is legitimate for modern churches to annul the teachings of older churches (the older Churches supported the Crusades). I agree with you that “Christ called his followers not to a physical war, but that our battle is with the Devil and his minions, fighting for the souls of men.” But what would you say to your Church leaders when they tell you that the Devil has manifested himself in a person, or that the Devil has created a new religion, and that religion now has a nation, and this nation is threatening the nation of God, and that every single member of this nation must be killed because he is a son of the Devil? Do you have the authority to object to your Church leaders?
Will you take a position any different from that of the average Christian in Europe during the Crusades, who had complete trust in his Church leaders and followed them unquestioningly? It would have been a lot different if the Church had supported the Crusades with the aim of merely overthrowing Muslim rule and establishing Christian rule in the Holy Land without affecting the lives of the people of the land. But that was not the case. The aim was to cleanse the Holy Land of the “Saracens” by killing every single one of them. And that aim was supported by the Church. I have no problem with Jesus Christ. I understand very well that he has nothing to do with all this history. I believe in Jesus Christ as a messenger of God, the same God of Mohamed. But I believe that there is a wide gap between Jesus Christ and Christianity. As for Abraham, even if the Bible does not explicitly mention that he used to fast or give to the poor, of course he did. Can you imagine any righteous prophet or messenger of God not doing these two practices? The fifth pillar, hajj, is not a biblical teaching, of course, because Abraham built the Kaaba with Ismail, not Isaac. So it would be very unlikely to expect the Israelites to record anything about it in the OT. But that does not mean that the hajj was a legend or that it was fabricated to legitimize a connection from Mohamed to Abraham. The connection is there whether the Bible mentions it or not. Mohamed’s lineage connects directly to Abraham through Ismail. Ismail was a legitimate son of Abraham, because Hagar (Ismail’s mother) was Abraham’s wife (Gen. 16:3). Here again is another example of how you take only what the Bible says as the only credible historical account. Anything else is a “legend” to you, even when it does not contradict the Bible! The message of Christianity as you explain it is where we differ. I don’t see the story of Abraham and his son the way you see it.
Abraham was required by God to conform to His commands in order to win His grace, and that’s the true message. Humans are required to believe and to do righteous deeds in order to deserve God’s mercy and win His heaven. Abraham was not ordered to kill his son as an act of sacrifice; does the Bible tell us that? Abraham was ordered to kill his son only as a test from God. He set out to sacrifice his son for the sake of God in fulfillment of God’s command, not as redemption for some sin he had committed (“Some time later God tested Abraham…” Genesis 22:1). The order was a commandment to test Abraham’s faith, and when he showed great faith, obedience, and readiness to take action and fulfill the commandment, God granted him His blessings and mercy because he passed the test. When was death or human blood the only way to redeem sin? As a matter of fact, when Adam committed the first sin by disobeying God’s order not to eat from the tree, God only taught him a few words and told him to pray to Him in those words, and God forgave him. God is so generous and merciful that some of our sins can be forgiven by Him just by asking Him for forgiveness with sincere intention. We can never be exempt from acts of worship and acts of righteousness. These acts are the direct manifestation of our faith. How can one claim to have faith in God when his actions don’t translate into righteous deeds? Acts of worship are not human inventions. They are specifically defined by God Himself. Only God defines how He is worshiped. The five pillars of Islam are made up of the declaration of faith (the first pillar), while the rest (regular ritual prayers, the fast, charity, and hajj) are predefined acts specified by God Himself which are the true translation of the faith declared in the first pillar. God does not have a son, and even if He did, I cannot imagine God sacrificing His son for our sake. The Christian doctrine of the crucifixion doesn’t make sense!
Typically, sacrificing something means losing it; that is, you lose the thing you sacrifice forever. It’s gone! But was Jesus sacrificed as far as God is concerned? According to Christianity, Jesus is still alive and is with us to this day; we only don’t see him. God didn’t lose Jesus, we did not lose Jesus, and so how was there a sacrifice? Who sacrificed whom? Islam’s answer is very simple. We are required to believe in God and to obey His commands. Believing in God but disobeying His commands is sin. God forgives sins if we repent and redeem them. God gave us plenty of ways for redemption. Some sins can be redeemed by simply asking God with true sincerity for forgiveness. Other sins are redeemed by some sort of sacrifice. The crucifixion cannot be considered in any way as God’s way of redeeming our sins. No one was really sacrificed in the crucifixion. God did not sacrifice Jesus, just as Abraham did not sacrifice his son. We are always required to perform acts of worship and acts of righteousness and follow God’s commandments, and can never rely on faith alone. God’s grace cannot be given to us unless we follow His terms. Otherwise, what is the meaning of sin?! Sin is the disobeying of God’s commands. If we are not required to obey some commandments or to do certain acts of worship, then there is no way for us to sin in the first place. If I am required only to believe, then I have a blank check. Adolf Hitler and Mother Teresa can be next-door neighbors in Heaven. Is that what you mean by God’s terms? One term only: accept the blood of Jesus as a cover of your sins? God’s terms are His specific commandments and acts of worship and righteousness which He defined for us in His truly revealed Holy Book (not anything that we invent for ourselves). Our duty is to find God’s message, believe in it, and conform our lives to it, and when we sin (and we most certainly will), return to God, ask for His forgiveness, and repent. He will most certainly forgive us.
This is salvation based on grace, on God’s terms, not on our own terms. I am fully open to discussing and understanding our differences; however, I do not expect them to be reconciled. I respect you and all Muslims as being created in the image of God. I believe that we can disagree and still live in harmony. The “formative period” of Christianity is obviously a bit different from Islam’s because of the status ascribed to its leader. Christianity is different from any other world religion because Jesus was God in human flesh. He didn’t have to figure things out or form a religion. He was the center of the religion and taught people to look to his perfect example. The formative period of Christianity began within the context of Judaism. It was the fulfillment of the prophecies of Judaism. The early Christians’ greatest challenge was adapting Judaism to a secular Roman world. While Paul did much to expand the Church and describe its theology, he did nothing to change its theology. You keep bringing this up, but it is simply not true, and you offer no proof. If you want to test Paul’s theology, place it next to the teachings of Jesus in the Gospels, or the books by Jesus’ closest disciples, Peter, James, and John (1 & 2 Peter; James; 1, 2, & 3 John; & Revelation). Feel free to question doctrines and ask me. I would love to explain and help you reconcile the assumed differences. You must recognize the difference between doctrine and practice. The doctrine, or beliefs about who God and Jesus are, was already understood by the time Jesus ascended. What was developed in the formative period was the practice of the religion. Islam has had the same process. There are always areas of life that religion speaks to, but does not explicitly speak about. For instance, human cloning is not specifically addressed in the Bible or the Qur’an, but from a reading of the scriptures, we can understand a proper response to human cloning within our religious contexts.
The New Testament is not a product of sects. The sects developed later and were formed based on previously held beliefs such as Gnosticism. Gnosticism already existed before it took form as a pseudo-Christianity, but it began adopting Christian teachings to suit its purposes and in an attempt to legitimize itself. Calling Gnosticism “Christianity” is no different than calling Mormonism “Christianity.” They look similar at first glance, but if you really look at them, they are completely different religions. We don’t have any writings early enough from other “sects” to be considered as opposing mainline Christianity. I urge you to read these Gnostic “gospels” next to the New Testament and see how different they are. You cannot say the Qur’an was completed during Muhammad’s life. It was not compiled until after his death, and only then from oral tradition, which scholars outside of Islam will tell you is far from superior to written texts. The difference in how we view Christianity lies in our understanding of grace. In Christianity, grace is not something to be earned. Abraham’s sacrifice in and of itself did not earn him grace. That would make grace dependent upon an act of man. Grace is wholly dependent on God. There is no act we can do to deserve grace. Hopefully, you can recognize that as a Christian, I am more qualified to explain what Christianity is and believes, just as you as a Muslim can better explain and understand the beliefs of Islam. As for the story of the hajj, are there copies of this written in literature dating prior to 1000 B.C.? If not, we have little choice but to consider this an oral tradition and a legend. It does not explicitly contradict the Bible, and it is possible that there is some truth in it, but I find little reason to believe it. Forgiveness is where our religions seem to differ. In Judaism and Christianity, a blood sacrifice was always required for forgiveness. In Christianity, Christ is the final blood sacrifice for all time.
Islam seems to have nothing close to this doctrine that I’ve heard of (feel free to correct me if I’m wrong). The purpose of the blood sacrifice was to show our need for God’s mercy and how this was a sacrifice impossible for us to make. Our sins cannot be redeemed apart from Christ’s sacrifice. The blood sacrifice was all a foreshadowing of the death of Christ on the cross. When you say: “If I am required to only believe, then I have a blank check. Adolf Hitler and Mother Teresa can be next door neighbors in Heaven. Is that what you mean by God’s terms? One term only: accept the blood of Jesus as a cover of your sins?” the answer is yes! That is the beauty of grace. No matter how far we stray from God’s will, He is always waiting for us to come home. He is always loving us, even when we don’t deserve it (and we can never deserve it). That is why I say it is only God’s grace, and that Islam does not have the same picture of grace. You are right in saying this; however, our understandings of what grace is are far different between Islam and Christianity. Christianity’s grace is far more dependent on God. The thing about grace is that it can’t be earned, as I said before. Grace based on works, as Islam describes it, is not grace at all. Grace requires nothing of us. It is a free gift of God. This gives us a higher view of God and His character. OK, so Christianity is the religion in which Jesus is God in human flesh, and Jesus did not have to figure things out or form a religion. Well, even when you describe Christianity in that way, you are describing a religion that was not there before Jesus’ departure. Judaism of old and Judaism of today does not recognize any of this. Judaism recognizes a Messiah, but not a man who is God or semi-divine. Because, after all, if Judaism did accept what you are saying, then we wouldn’t have two separate religions, Christianity and Judaism.
We started to have these two separate religions the moment the doctrine of Christianity (including the divinity of Christ and salvation through Jesus’ blood, etc.) started to be formed after Jesus’ departure. This doctrine formed as a result of a process which involved different groups of people with different opinions and views. These views varied on a wide range of issues, including the nature of Jesus himself and whether he was human or divine or both. Eventually, one side overcame the others, and that’s what became Christianity. You cannot refer me to the Gospels to show that the divinity of Christ is an essential doctrine taught by Jesus, because the Gospels themselves are a product of the formation process of Christianity which Jesus had nothing to do with. We don’t have the truth of what Jesus said or did; we only have one version narrated by one side of the debate, the side which turned out victorious over the rest. When I say the rest, I am not talking about just the Gnostics. You keep bringing up the Gnostics again and again as if they were the only ones out there during early Christianity. I am talking about the others who were lost. We know that early Christians were persecuted, and I am sure some of those were persecuted just because they believed Jesus was the Jewish Messiah, not because he was divine or that he came to die for people’s sins. If these people were the ones who had survived instead, your Bible would have nothing in it that says or alludes to Jesus being divine, or that he died for people’s sins, or even that he was crucified. So, once again, we cannot test Paul’s theology by placing it up next to the teachings of Jesus in the Gospels, simply because the Gospels themselves are the product of Paul’s school of thought. So those teachings of Jesus that are found in the Gospels are most probably carefully picked, and perhaps twisted, in a way that eliminates any contradiction or discrepancy. So, please suggest a different criterion for testing.
As for the completion of the Quran, I most certainly can say that the Quran WAS COMPLETED DURING MUHAMMAD’S LIFE! When the only thing that remains to be done to a book is to compile it into a single volume, that does not mean the book was not complete! God completed His revelation of the Quran to Mohammed before Mohammed’s death. Mohammed gave instructions on how the Quran was to be arranged. His followers preserved every word of the Quran both in their memory and in writing. The only thing remaining to be done was to compile the pieces, according to Mohammed’s instructions, into one volume. As for oral transmission… When the focus is only on relaying the gist of the information, oral transmission is inferior. But in the case of the Quran the emphasis was on easy transmission of the Quran while at the same time preserving every single word of it. The Quran was memorized word by word, and handed down orally from one generation to the next. I am absolutely confident that the way the Quran was orally transmitted is far superior to written text. For one, it is much easier for scribes to make copying mistakes, and once these mistakes are made it becomes difficult to detect and correct them. In the case of oral transmission, when a passage is recorded in memory (with continuous rehearsal) it is hard to forget, and if mistakes are made it is very easy for listeners who have memorized the same passage to spot the mistakes and correct them. I wonder how those scholars outside of Islam whom you are referring to could explain the fact that the Quran, which was transmitted mainly orally, was preserved entirely over a period of more than 14 centuries without a single transmission error, while the Bible, which was transmitted in writing, has all the copying mistakes that it does. As for the difference in our view of grace, I will partially agree with you. You say “grace is not something to be earned”.
I agree that our life here on earth is a great grace from God, and we did not earn it; it was bestowed on us by the grace of God. So, in that sense, yes, I agree. But we are not talking about this now; we are talking about an extra stratum of grace. We are talking about forgiving sin. Nothing we do can pay back the grace of God which we are enjoying just as we stand here and now in this life. Yet, God promises us more. He has prepared heaven for us, wherein we will have much more than exists in this world. That promise too is granted to us by God’s grace, but we need to believe in God and obey Him. But even if we disobey Him and sin, God will be waiting for us with His overwhelming grace and will accept us when we repent to Him and redeem our sins. So, God’s grace is overwhelming in many ways: There is the freely granted grace of life and its pleasures here on earth. There is the grace of heaven and its inconceivable joy and pleasure, which we obtain when we believe and obey. And there is the grace of God’s forgiveness, which we automatically receive when we repent. The reason why you find little reason to believe the story of the hajj is simply because it is not mentioned in the Bible. Had it been there, you would not find a problem believing it, and you would not ask for a written account anywhere else. Where can I find something written about this for you when it is well known that the Arabs were an illiterate nation all throughout their history until Islam came?!! Before that they never had historians, and no historian is known to have been interested in their history! The only thing that you can expect to find written before Islam might be a few verses of poetry. So, please don’t ask for written proof when it comes to events in Arabia prior to Islam, and please have a little more respect for oral Arab/Islamic tradition, because in many instances it is more trustworthy than Christian/Western written tradition.
As for blood sacrifice, are you saying that the element of blood has to be there in the sacrifice? Does blood as an element play a role in achieving redemption for the sin? In Islam the Arabic word used for “sacrifice” is “tad’heya” (تضحية) and the notion is that you give up something of your possession. That’s the general meaning of the word. In religion, when you commit a sin, the way to redeem the sin is to give up something, i.e. sacrifice something that’s dear to you. How significant that something is depends on how big the sin is. When the sacrifice involves blood it is called “od’heya” (أُضحية). Islam does not use these terms frequently. The more frequent term is “kaffara” (كفّارة), which carries basically the same meaning but does not involve blood sacrifice. Typically, kaffara is in the form of charity to the poor, or fasting a number of days (in fasting you sacrifice your desire for food). In general, as I understand it, the purpose of the sacrifice in Islam (whether it involves blood or not) is to tame one’s own self by depriving it of something dear and thereby reminding it that the topmost priority is obedience to God, not obedience to its own desires. God does not need our sacrifices; it is we who need to purify ourselves and remind ourselves of the correct order of priorities. Your answer to my hypothetical example of Adolf Hitler and Mother Teresa was very bold! But I am sure it is not what God wants to tell us. Because if that’s God’s message, then don’t blame anyone if they act with great mischief; they can kill, steal, cheat, do whatever they wish, and they have full right to do so because God gave them the OK and He will even give them heaven after they die, just as long as they accept that Jesus died and shed his blood to cover for their sins. Who dares to say anything against the Crusades anymore; it’s all OK by God! It is their victims who shall burn in hell forever.

Judaism’s rejection of Jesus does not mean what you take it to mean.
They expected a conquering king that would free them from the Romans, but the prophecies of the Messiah can be divided into two categories, the Suffering Servant and the Triumphant Ruler. Christ fulfilled the Suffering Servant role when he came the first time. He said he would come back to fulfill the second role, which is understood to have begun even now as he reigns in the hearts of his followers. Consider this: the only reason for Judaism’s continuation is their rejection of Jesus as Messiah. If Jews as a whole embraced Christ as Messiah, there would be no Judaism. So, Judaism’s take on the Messiah today is really not relevant (assuming Jesus is the Messiah), given that they completely missed him! Take for example a man who says “The world is flat.” A thousand years ago, we would have considered his idea relevant and perhaps even scientific. Today, we know better and we don’t call him a reputable scientist! Blood sacrifice was the basis of God’s religion even prior to Judaism (see Gen. 4:3-5). Judaism revolved around the blood sacrifice. Every year the high priest would give a blood sacrifice in the temple and it would be burned up on the altar by God if He accepted it. The last year God burned the sacrifice was prior to Christ’s death. Christ’s sacrifice was the final blood sacrifice and the end of Judaism. The implication is that Christ’s blood covers the sins of all who follow him. While we still repent when we sin, we don’t atone for our sin by an action or by giving something up. Fasting and sacrifice are a part of life as a Christian, but are not part of the forgiveness process. Islam attempts to make atonement for sin, and where there is personal atonement, the effect of grace is negated. A question I would ask you to ponder is this: Whose Messiah are we speaking of, God’s Messiah or the Jews’ Messiah? Is it not up to God to decide who and what the Messiah is? Perhaps he didn’t give the Jews a full description of the Messiah in advance.
He did give them enough to know that the Messiah would come before 70 AD, when the temple was destroyed by Rome. Where are these “lost Christianities” of which you speak? Could it be that they didn’t exist? It’s really a rather absurd argument to say that you believe something, but it’s too bad we destroyed all the evidence. Your argument about excluding the Gospels from our testing criterion is equally absurd. You have not shown me one point on which Paul disagreed with the Gospels. He wrote some of his letters before the Gospels were written. Why did their writers never address his “false teachings”? Did he dupe the disciples too? I think not. You’re really grasping at straws. I don’t believe the story of the hajj, not because it’s not written in the Bible, but because it’s not written in any source I am aware of within a thousand years of when it would have happened. Abraham lived over 2000 years before the founding of Islam. It is possible it happened, but I don’t think it is likely to be any more than a legend. While my answer to your hypothetical example of Adolf Hitler and Mother Teresa is bold, it is also true. There is no sin too great for God’s forgiveness. This is not to encourage an abundance of sin, but rather for people to know that nothing they do can ever separate them from the love and grace of God. Does that mean some people who did terrible things will be in Heaven? Yes, it does. But sin is not measured by God on a scale. One sin is enough to merit us eternal separation in Hell, but one confession and one name, the name of Jesus Christ, is enough to wash away any and every sin. This is true, but there are a couple of problems with this. The New Testament specifically warns against this notion, and man does not know when his end will come. While he may be able to turn and repent before it’s too late, he could also die unexpectedly.
On the subject of the Qu’ran being transmitted without error, I will concede that it may have been copied well (although I couldn’t personally say I’ve read much on this). I don’t think you can ignore the fact that there were several copies burned when the Qu’ran was compiled to keep from having discrepancies. I’m still not convinced the human memory of oral tradition is as reliable a method of transmission. Consider what would happen if you cut out one single generation and had no written copies. Everything would be lost! While it is quite an accomplishment to memorize the whole Qu’ran, consider that attempting to memorize the Bible, or even the New Testament, would be a far more difficult task.

Are you saying that atonement is only possible for those who repent before they die? Is this the Christian position on salvation? In other words, if you do not repent, then “it can be too late”. Is not repentance the act of refraining from sin, the feeling of regret at committing sin, and being righteous? You see, all these acts are works, combined with faith, and this is exactly the Islamic position on God’s forgiveness. It is a favour and a grace, which only believers can realise. It is more likely to be bestowed on those who are righteous, but it is not exclusive to them, as God can bestow His redemption and forgiveness on anyone; the only people excluded from God’s favour on the Last Day are those who ascribe partners to God in His supremacy and those who deny God altogether and disbelieve His Messengers. The Quran was transmitted as an oral AND written text, the primacy being with the oral transmission. To elaborate, Muslims had the written text as compiled by the companions, headed by Zaid son of Thabit, during the reign of Abu Bakr, the first successor to the prophet (ص). Abu Bakr died 2 years after the death of the prophet (ص). Osman, the third successor to the prophet, made copies of the compiled text and sent them to the different regions of the expanding Islamic land.
Osman’s reign started only 12 years after the death of the prophet. The orally transmitted Quran had to conform to the written text. This had the effect of preserving elements of the text like vowels, dots and accents which were not used in written Arabic at the time. The omission of vowels and accents from written text was the norm for Arabic and also for Hebrew, which followed the Arabs in introducing them to the Hebrew Bible around the 8th century.

Yes, I’m saying that atonement is only possible for those who repent before they die. I say that with the understanding that this only applies to those living post-crucifixion. Repentance is the act of recognizing sinfulness and our inability on our own to do anything about it. It doesn’t include refraining from sin, although that should follow. You must understand there is the act of repentance as it pertains to salvation at a single moment, and the continual act of repentance, since we do not become perfect. Repentance is not considered a “work” in regard to salvation. Repentance is a recognition that not a single work we could offer to God would be worthy. To be saved, we must repent (within our lifetime) and begin a relationship with Jesus Christ. We then understand that Jesus’ sacrifice is the only way we can be saved. There is no work involved in this salvation. The works follow from a relationship where the nature of Christ indwells the believer by the Holy Spirit.

So Abdo was right when he said you can do as you please, sure in the knowledge that you are saved, as long as you acknowledge that what you do is wrong. You can steal, kill and do every sin known to man, and continue doing it, and just acknowledge your sinful way and be saved. Don’t you see that in the verse you quoted (John 17:3), Jesus is making a clear distinction between himself and God? Jesus is clearly saying he was sent by God; thus, he is subject to the will of God. There is in no way an encouragement to do as you please and expect salvation.
The thing is, if you’re doing what you please, you may not have really trusted Christ for salvation. We are not talking about a blanket license to sin here, as you seem to suggest. Consider the example of David. He was a murderer and an adulterer. He repented and turned back to God afterward. This is not to say we should do whatever we want, but that even when we sin, it is never too late to turn back to God. Jesus was sent by God and followed God’s will. He chose to follow. There is a distinction between the Father and the Son, but that does not mean they are fully separate. The Son is the incarnation of the Father, visible to mankind. The Father cannot be seen by man.

When we look at Judaism’s reaction to Christ we find it was a mixture of two reactions on two levels: a) the reaction of the average Jew, and b) the reaction of the Jewish leaders. While average Jews believed in Jesus and accepted him as the Messiah, the Jewish leaders, on the other hand, rejected him and decided to get rid of him. Average Jews believed in Jesus and accepted him because they saw and heard his miracles for themselves. With the aid and power of the marvelous miracles which Jesus was given by God, he had no trouble gathering followers everywhere he went. Regardless of what the average Jew’s conception of the Messiah would have been (and here I wish to second what Rasheed said, that ‘Messiah’ only means ‘anointed one’ and it could involve kings, prophets, holy men, etc.), it was clear to all those who believed in Jesus that Jesus was the Messiah. Rejection of Jesus came not from the Jewish people but rather from the Jewish leaders. The reason why, I believe, the Jewish leaders rejected Jesus is very simple: his reform plan meant that those leaders would be the first to go. That is, in order for the Messiah to really complete his mission, the first thing that had to be changed was the Jewish leadership of that time. At the essence of his mission as a Messiah, he was a reformer.
And to the Jewish leaders’ surprise he started with them, not the Roman Empire (where any Jew at that time would normally direct his antagonism). This, in my view, explains why their opposition to him aimed at diverting the issue from its true nature, where Jesus, as the Jewish Messiah, was trying to reform the affairs of the Jewish people, into a case in which Jesus was portrayed as a threat to the Roman Empire by claiming he was King of the Jews, etc. In short, Jesus was largely accepted by the Jewish people as the Messiah, but rejected by their leaders because he was an existential threat to them. Were it not for the Jewish leaders, all Jews would have followed Jesus and accepted him as the Messiah. Only after Jesus’ departure did the new doctrine of Christianity start to emerge. Jesus’ divinity, the crucifixion as sacrifice for human sin, the trinity, etc., are all new doctrines based on later interpretations of the last events of Jesus’ life, later interpretations of Jesus’ teachings, and later interpretations of the OT scriptures. These issues were the subject of debate among the generation following his departure, not between him and the Jewish leaders! As a matter of fact, the process by which NT scriptures were transmitted (and translated) leaves obvious traces of such debates and of how some passages were written and/or translated in a way to support the new doctrines against their opposition. Therefore, when you say “If Jews as a whole embraced Christ as Messiah, there would be no Judaism,” I say, on the contrary, that if Jews as a whole had embraced Christ as Messiah, there would be no Christianity; there would be “reformed Judaism”. You say “Blood sacrifice was the basis of God’s religion even prior to Judaism (see Gen.
4:3-5).” It is very interesting that you brought up Gen 4:3-5, for two reasons: a) it is the only place where blood is involved, and therefore more evidence is required to establish your point about blood sacrifice being “the basis of God’s religion”, and b) which is more important, this specific passage shows that in fact fruits were also used as an offering, by Abel. So, there seems to be one more “basis” for God’s religion. As a matter of fact, Gen. 4:3-5 does not talk about sacrifice in the sense you take it to mean. Neither the animal nor the fruit was offered as a sacrifice for sin! The original Hebrew word used for the word “offering” is “minchah”, and the Arabic Bible uses the Arabic word “قربان”, which is also the same Arabic word used by Allah in the Quran in its narration of the same story. And by the way, the word “minchah” is almost identical to another Arabic word, “منحة”, which means exactly the same thing. None of these words imply that the offering was made to atone for sin. The offering was basically a symbol of obedience and loyalty to God Almighty, and God did not dictate what sort of offering it had to be. The most important thing is that it be sincere; i.e., that it be backed up by good acts (“If you do well, will not your countenance be lifted up? And if you do not do well, sin is crouching at the door…”). Interestingly enough, the passage indicates that it was actually “sin” itself that prevented the offering from being accepted by God. It does not make sense for God to tell Cain that the reason his sacrifice was rejected is that he had sins crouching at his door, when the sacrifice was made to atone for sin in the first place! It is as if two people want to apologize to me for something bad they did to me, and each one of them brings me a gift; I accept the gift and apology from one of them, but not from the other.
And when the other asks me why I did not accept his apology, I say to him, “I did not accept your gift and apology because of what you did to me!” Does that make any sense? Why did I accept the apology and gift from the first person, then?!!! Therefore the way this passage should be understood is that Abel and Cain were not offering a sacrifice for sin; they were simply making an offering to show loyalty and obedience to God Almighty. God accepts our loyalty, but first we have to cleanse our sins. We cleanse our sins by repentance and by stopping those sins. That is the message of Gen. 4:3-5, and it can in no way be understood to mean that blood sacrifice was the basis of God’s religion, as you say. Furthermore, in Judaism only a few sins required animal sacrifice. According to the Torah, an intentional sin could only be atoned for through repentance, not through an animal sacrifice (Psalms 32:5, 51:16-19). Repentance is for one to recognize his sin, turn to God for forgiveness, and refrain from going back to that sin. Animal sacrifices were prescribed in the Torah only for unintentional sins (Leviticus 4:2, 13, 22, 27; 5:5, 15 and Numbers 15:30). The one exception was when an individual who was accused of theft swore falsely in an effort to gain acquittal (Leviticus 5:24-26). You say “Islam attempts to make atonement for sin, and where there is personal atonement, the effect of grace is negated.” I say no, atonement does not negate the effect of or the need for grace. Atonement for sin is required in order to be accepted by God and to be eligible for the next level of His grace; that is, Heaven. Cain’s sins were the obstacle between him and God.
And instead of cleansing his sins, he committed even more sin by killing his brother Abel, which earned him further rejection from God (“And now art thou cursed from the earth, which hath opened her mouth to receive thy brother’s blood from thy hand; When thou tillest the ground, it shall not henceforth yield unto thee her strength; a fugitive and a vagabond shalt thou be in the earth.” Gen. 4:11-12). I did not talk about “lost Christianities”. I talked about the early generation of Christ followers who were being persecuted. I cannot describe them as Christians because Christianity is what evolved after they were gone. We don’t know what the religion would have looked like or have been called had these followers remained. But we know very well that the majority had vanished under Roman-Jewish persecution. We also know that a wide range of theological difference existed among them. We know that the way Jesus departed this world was not conceivable to most of them, and major differences of opinion emerged among them regarding the nature of Jesus and his message. We know that many followers fled to surrounding areas, and their theologies may not have been part of the debate, or may have been branded as apocryphal or heretical. As for Paul and the Gospels, it is really strange that you are expecting me to show you a point on which Paul disagreed with the Gospels!! This is really funny, because that’s exactly my point: Paul does not disagree with the Gospels simply because he and the Gospel writers belong to the same school of thought. You say “I don’t believe the story of the hajj, not because it’s not written in the Bible, but because it’s not written in any source I am aware of within a thousand years of when it would have happened.” You sound as if you want us to believe that if you found the story written in any such source you would right away believe it! I honestly doubt this!
You are asking for a written account of the story when you are fully aware that such an account would very unlikely ever exist, and I explained to you why it is almost impossible to find a written account (Arabic history before Islam is mainly oral, not written). You say “The New Testament specifically warns against this notion, and man does not know when his end will come.” I say, why would any believer in Christ’s sacrifice on the cross ever care about what the NT warns against when you have given believers a blank check? If I have a blank check, do you think I would care whether I will die expectedly or unexpectedly?!! I will care about one thing only: to enjoy all the pleasure I can get from this life and never care at all about what other people think of me. Who are they to tell me what to do and what not to do when I have insurance from God, Lord of the Heavens and Earth, as you propose? I will have all my sins covered up for me, as you propose. I shall enjoy all the pleasures of this world, and when I die I will even enjoy the greater pleasures of Heaven. I am God’s son and he loves me, isn’t that right? As a matter of fact, such a person who has the guarantees which you are speaking of should wish to die the sooner the better, because death will bring him even greater joy and pleasure and take him away from the sufferings and miseries of this world. As for the Quran and its transmittal, please tell me more about the “several copies burned when the Quran was compiled to keep from having discrepancies.” What sort of discrepancies are you referring to? And which single generation was cut off from the transmittal of the Quran? Transmission of the Quran (both orally as well as in writing) started during Mohammed’s life as it was being revealed to him. So which generation are you referring to?

I am interested in the history of the early church and how corruption distorted the original teachings of Jesus.
I am a Christian who cannot understand the basis of the Trinity, as there is only one God. It is very sad that the Ebionites (the original followers of Jesus) were treated as heretics. I have read many books on this subject and I cannot believe the dark history of the church. Look forward to hearing from you.

The Trinity is a biblically based doctrine that came about through the need for the church to understand the relationship between Father, Son, and Holy Spirit. There are many good books you can read, but I would go to a better source to ask. Rasheed could tell you about early Islam, but if you want information about the early church, I would suggest contacting a seminary professor at a solid, evangelical seminary. There are several in the US… Southwestern, Southeastern, Beeson, Southern Baptist (SBTS), Trinity, Dallas Theological (DTS), etc. If you were looking for advice on how to fix your car, would you go to a mechanic or a lawyer? P.S. Don’s link is going to steer you in the wrong direction. The apostasy of the LDS church is far more severe than anything in evangelical Christianity. Sorry Don.

Welcome to my blog, and thank you for your comment. I obviously share your views on the unity of God, and I believe too that the true message of Jesus was corrupted. I am sure that you did not reach your views through reading my blog, and I trust that you can discuss freely a variety of opinions, including those proposed by Don, myself, and of course the official church view(s) on the Trinity and the history of Christianity or Islam. My own take on the subject is that, if you remove the trinitarian theory from Christianity, you will discover that the religion that God revealed to humanity has always been the same: Worship God alone, follow his messengers, and work for your hereafter.

Christians believe in the unity of God. We also believe in the Trinity.
Muslims understand the Trinity as a separation (Tritheism), but Christians understand it as three parts of the same whole, similar to a head, torso, and legs being parts of the same body. These parts have separate functions, but all must work in relation to one another. The above articles discuss different understandings of the Trinity that are not what the Bible teaches. There are other large differences between Islam and Christianity, especially when it comes to understanding man’s condition, salvation, and the meaning of grace.

Throughout history, there have been so many different ways the Trinity has been understood and explained by different Christian theologians, each party accusing its opponents of being heretics. A great deal of ‘guesswork’ needs to be applied in order to explain the Trinity. It is a mystery that no one ever seems to have understood. Each explanation raises more questions than it answers.

To say that there have been different ways of understanding it implies that there is not one true way to understand the Trinity. There is one true way to understand it, which is why those who have challenged the doctrine have been forced out of the church. I daresay your difficulties with the Trinity come from the fact that you would be classified as a Tritheist, which is not Trinitarian at all. I will read your post though and see if I can’t aid your understanding. A thorough understanding of the Trinity might just change your mind!

Why do evangelicals consistently attempt to deny people the opportunity to find out for themselves what is true? Why would you deny Denise the opportunity to research her question from the point of view of an LDS scholar and apostle? Is it better she get her information on our beliefs from apostate Mormons or anti-Mormon members of other faiths? If it’s not true, she’ll figure it out. If it is true, isn’t that what we should be searching for? Why are evangelicals so threatened?
Do they really believe that the children of men are so stupid that they can’t find out the truth for themselves? Or perhaps you consider yourself an expert on our beliefs, above and beyond our own scholars and leaders.

There is little truth to be found in the LDS church. I’m not denying anyone the “opportunity”. I’m just telling the truth. I don’t send people down dead-end streets. It’s not that evangelicals are “threatened” by Mormons, but we are called to expose what is not true, and Mormonism has strayed far from biblical truth. Have your own scholars and leaders told the truth about the Book of Abraham yet? I don’t trust them to tell the truth if they won’t even allow for the fact that Joseph Smith’s translation of the so-called “Book of Abraham” was completely fictitious and fabricated. This single fact calls everything else about Joseph Smith and his religion into question. Can we at least agree upon this? My post was nowhere near akin to “liar, liar, pants on fire.” The fact is, Joseph Smith falsely labeled and translated an entire book that was foundational to Mormon beliefs. What else did he lie about? The issue is not about talking about, rejoicing in, preaching, or prophesying of Christ. The issue is deeper than that. It is, when we speak of Jesus Christ, to whom are we referring? Is He the Son of God? Is He God? What part does He play in the Trinity? Unless we can agree on these foundational issues, we don’t talk of, rejoice in, preach, or prophesy of the same Christ. Christ spoke many times of false Christs and anti-Christs. These don’t have to be physical people. There are many anti-Christs even among so-called Christianity. When Rasheed speaks of Christ or when John Dominic Crossan speaks of Jesus, they are not referring to the same Jesus of the Bible. They are referring to a Jesus who did not die or was not raised from the dead… who performed no literal miracles… or who was a prophet only and not the Son of God and God himself.
You may or may not be familiar with the Evangelicals and Catholics Together or The Gift of Salvation documents signed by leaders of the evangelical and Catholic churches. They thought that after 500 years of separation they were finally agreeing on the gospel, but the Catholic Church knew the document was vague enough not to give any ground, while leading hopeful evangelicals to believe they were suddenly on the same page in what had only been a long misunderstanding. The truth of the matter is, while there may be things Mormonism and evangelical Christianity hold in common, our common ground is not enough to walk together on. Your gospel is not the same and is no gospel at all. I know this may sound harsh, like a scathing commentary on the Mormon faith, but I hope you understand I am in no way personally attacking you. It is my responsibility not only to avoid error in my own right, but also to be on my guard against it and expose darkness and lies to the light of truth. I wish you well and hope you may come to know the fullness of Christ in the faith of the disciples and apostles.
2019-04-26T04:42:13Z
https://hardquestions.wordpress.com/2007/12/15/thoughts-on-the-crucifixion/
BERMUDEZ FIGUEROA, EVA was born 27 December 1938, is female, registered as Republican Party of Florida, residing at 9795 Bayside Ct, Spring Hill, Florida 34608-3827. Florida voter ID number 104451969. This is the most recent information, from the Florida voter list as of 31 March 2019. 31 December 2018 voter list: EVA BERMUDEZ FIGUEROA, 8469 SWISS RD, SPRING HILL, FL 34606 Republican Party of Florida. 31 July 2017 voter list: EVA BERMUDEZ FIGUEROA, 8469 SWISS RD, SPRING HILL, FL 346061147 Republican Party of Florida. 29 February 2016 voter list: EVA BERMUDEZ FIGUEROA, 1101 LODGE CIR, SPRING HILL, FL 346065038 Republican Party of Florida. BERMUDEZ-FIGUEROA, EVA E. was born 2 January 1977, is female, registered as Florida Democratic Party, residing at 7531 Wabash Trl, Spring Hill, Florida 34606-5038. Florida voter ID number 114010683. Her telephone number is 1-352-200-7603. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez Figueroa, Jose Ramon was born 8 June 1978, is male, registered as Florida Democratic Party, residing at 2458 Oak Hollow Dr, Kissimmee, Florida 34744. Florida voter ID number 119877687. This is the most recent information, from the Florida voter list as of 31 March 2019. 31 December 2017 voter list: Jose Ramon Bermudez Figueroa, 3970 PEMBERLY PINES CIR, St. Cloud, FL 34769 Florida Democratic Party. BERMUDEZ FIGUEROA, LUIS ARMANDO was born 8 January 1990, is male, registered as No Party Affiliation, residing at 4231 Maplehurst Way, Spring Hill, Florida 34609. Florida voter ID number 125147140. This is the most recent information, from the Florida voter list as of 31 March 2019. BERMUDEZ FIGUEROA, MIGUEL ANGEL was born 17 October 1958, is male, registered as Republican Party of Florida, residing at 9795 Bayside Ct, Spring Hill, Florida 34608-3827. Florida voter ID number 121063397. This is the most recent information, from the Florida voter list as of 31 March 2019. 
31 December 2018 voter list: MIGUEL ANGEL BERMUDEZ FIGUEROA, 8469 SWISS RD, SPRING HILL, FL 346061147 Republican Party of Florida. 29 February 2016 voter list: MIGUEL ANGEL BERMUDEZ FIGUEROA, 1101 LODGE CIR, SPRING HILL, FL 34606 Republican Party of Florida. Bermudez Flores, Gumersindo was born 29 August 1956, is male, registered as Florida Democratic Party, residing at 5017 Chalet Ct, Apt 309, Tampa, Florida 33617. Florida voter ID number 114619887. This is the most recent information, from the Florida voter list as of 31 May 2012. Bermudez Francis, Daniel Alejandro was born 9 August 1993, is male, registered as Republican Party of Florida, residing at 18209 Creekside Preserve Loop, #102, Fort Myers, Florida 33908. Florida voter ID number 124959671. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez Frau, Alberto Jose was born 13 January 1981, is male, registered as No Party Affiliation, residing at 1427 Ne 17Th Ct, Ft Lauderdale, Florida 33305. Florida voter ID number 120872399. This is the most recent information, from the Florida voter list as of 31 March 2019. 31 July 2016 voter list: Alberto Jose Bermudez Frau, 4740 W Atlantic Blvd, APT 201, Coconut Creek, FL 330636733 No Party Affiliation. 30 June 2015 voter list: Alberto Jose Bermudez Frau, 4740 W Atlantic Blvd, APT 201, Coconut Creek, FL 33063 No Party Affiliation. BERMUDEZ-FRAU, MARCOS ANTONIO was born 12 July 1982, is male, registered as No Party Affiliation, residing at 1639 Alshire Ct N, Tallahassee, Florida 32317. Florida voter ID number 105128830. This is the most recent information, from the Florida voter list as of 31 March 2019. 31 July 2014 voter list: MARCOS ANTONIO BERMUDEZ, 1333 AVONDALE WAY, TALLAHASSEE, FL 32317 No Party Affiliation. Bermudez Fuller, Mercedes P. was born 27 March 1955, is female, registered as No Party Affiliation, residing at 7815 Camino Real, ##I416, Miami, Florida 33143. Florida voter ID number 109702370. 
This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez Fuster, Angel Manuel was born 28 March 1972, is male, registered as No Party Affiliation, residing at 15231 Sw 80Th St, Apt 512, Miami, Florida 33193. Florida voter ID number 126078049. His email address is ANGELBF1972@YAHOO.COM. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez Galindo, Alfredo was born 4 March 1946, is male, registered as Florida Democratic Party, residing at 935 Nw 37Th Ave, Apt 11, Miami, Florida 33125. Florida voter ID number 125808277. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez Garay, Bezaida was born 4 May 1953, is female, registered as No Party Affiliation, residing at 16596 Ne 3Rd Ave, Miami, Florida 33162. Florida voter ID number 110024693. This is the most recent information, from the Florida voter list as of 31 March 2019. 31 October 2016 voter list: Bezaida Bermudez Garay, 16710 NE 9Th AVE, APT 607, N Miami Beach, FL 33162 No Party Affiliation. 31 May 2016 voter list: Bezaida Bermudez Garay, , , FL No Party Affiliation. 30 April 2016 voter list: Bezaida Bermudez Garay, 16710 NE 9Th AVE, APT 607, N Miami Beach, FL 33162 No Party Affiliation. BERMUDEZ GARCIA, DIANA IVETTE was born 14 March 1971, is female, registered as No Party Affiliation, residing at 1524 River Reach Dr, Apt 188, Orlando, Florida 32828. Florida voter ID number 125814391. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez-Garcia, Eliel was born 9 December 1980, is male, registered as Republican Party of Florida, residing at 751 Nw 207Th Ter, Pembroke Pines, Florida 33029. Florida voter ID number 123510552. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez Garcia, Mercedes C. 
was born 6 May 1966, is female, registered as Republican Party of Florida, residing at 11981 Sw 35Th Ter, Miami, Florida 33175. Florida voter ID number 109676440. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez Garcia, Oliver was born 22 June 1978, is male, registered as No Party Affiliation, residing at 4103 W Broad St, Tampa, Florida 33614. Florida voter ID number 122545508. His email address is obermudezfl@gmail.com. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez Garcia, Victor Gustavo was born 30 July 1962, is male, registered as Republican Party of Florida, residing at 1418 W 44Th St, Hialeah, Florida 33012. Florida voter ID number 124321786. This is the most recent information, from the Florida voter list as of 31 March 2019. 31 May 2018 voter list: Victor Gustavo Bermudez Gargia, 1418 W 44Th ST, Hialeah, FL 33012 Republican Party of Florida. Bermudez Gargia, Victor Gustavo born 30 July 1962, Florida voter ID number 124321786 See Bermudez Garcia, Victor Gustavo. Bermudez Garolera, Daniela was born 27 April 1997, is female, registered as No Party Affiliation, residing at 305 Nw 118Th Ave, Coral Springs, Florida 33071. Florida voter ID number 121593435. Her telephone number is 1-954-328-8856. The voter lists a mailing address and probably prefers you use it: 1231 Dickinson Dr Coral Gables FL 33146. This is the most recent information, from the Florida voter list as of 31 March 2019. 31 January 2019 voter list: Daniela Bermudez Garolera, 1231 Dickinson Dr, Coral Gables, FL 33146 No Party Affiliation. 31 October 2016 voter list: Daniela Bermudez Garolera, 305 NW 118Th Ave, Coral Springs, FL 330714018 No Party Affiliation. 30 June 2015 voter list: Daniela Bermudez Garolera, 305 NW 118TH AVE, Coral Springs, FL 33071 No Party Affiliation. 
Bermudez Garzon, Miguel Martin was born 30 October 1971, is male, registered as No Party Affiliation, residing at 11362 Sw 4Th St, Sweetwater, Florida 33174. Florida voter ID number 115147415. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez Gold, Anna Isabel was born 24 April 1994, is female, registered as No Party Affiliation, residing at 1643 Brickell Ave, Apt 3704, Miami, Florida 33129. Florida voter ID number 121603707. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez Gomez, Reynaldo Ramon was born 5 December 1962, is male, registered as No Party Affiliation, residing at 3355 W 68Th St, Apt 176, Hialeah, Florida 33018. Florida voter ID number 125110052. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez Gonzalez, Claudia B. was born 16 February 1989, is female, registered as Florida Democratic Party, residing at 15411 Sw 81St Circle Ln, Apt 312, Miami, Florida 33193. Florida voter ID number 121470338. This is the most recent information, from the Florida voter list as of 31 March 2019. BERMUDEZ GONZALEZ, EVELYN was born 10 October 1968, is female, registered as Florida Democratic Party, residing at 924 Massalina Dr, Panama City, Florida 32401. Florida voter ID number 120323841. This is the most recent information, from the Florida voter list as of 31 March 2019. 30 September 2018 voter list: EVELYN BERMUDEZ GONZALEZ, 101 BALDWIN ROWE CIR, PANAMA CITY, FL 32405 Florida Democratic Party. Bermudez Gonzalez, Lidice was born 7 February 1944, is female, registered as No Party Affiliation, residing at 2110 Sw 61St Ave, Miami, Florida 33155. Florida voter ID number 109629191. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez Gonzalez, Miguel Angel was born 7 May 1972, is male, registered as No Party Affiliation, residing at 601 Risen Star Dr, Crestview, Florida 32539-6017. 
Florida voter ID number 106098781. This is the most recent information, from the Florida voter list as of 31 March 2019. 31 January 2016 voter list: Miguel A. Bermudez, 601 Risen Star Dr, Crestview, FL 325396017 No Party Affiliation. 31 March 2015 voter list: MIGUEL A. BERMUDEZ, 4568 SCARLET DR, CRESTVIEW, FL 32539 No Party Affiliation. Bermudez Gonzalez, Nancy was born 21 March 1975, is female, registered as Florida Democratic Party, residing at 10940 Subtle Trail Dr, Riverview, Florida 33579-2338. Florida voter ID number 110704346. This is the most recent information, from the Florida voter list as of 31 March 2019. 31 December 2013 voter list: Nancy Betancourt, 10940 Subtle Trail Dr, Riverview, FL 33579 Florida Democratic Party. Bermudez Gonzalez, Nashira was born 13 September 1984, is female, registered as No Party Affiliation, residing at 634 N 9Th St, Eagle Lake, Florida 33839. Florida voter ID number 124984719. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez Gonzalez, Nelson Javier was born 30 May 1971, is male, registered as Florida Democratic Party, residing at 1119 W Circle St, Avon Park, Florida 33825. Florida voter ID number 122258900. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez Grafal, Isabel Maria born 17 May 1941, Florida voter ID number 121654849 See Bermudez, Isabel Maria. Bermudez Guadalaupe, Elba Milagros was born 15 September 1976, is female, registered as No Party Affiliation, residing at 92 Sw 3Rd St, Apt 3005, Miami, Florida 33130. Florida voter ID number 122936182. This is the most recent information, from the Florida voter list as of 31 March 2019. 31 January 2018 voter list: Elba Milagros Bermudez Guadalaupe, 2751 S OCEAN DR, APT 301 N, Hollywood, FL 33019 No Party Affiliation. 
Bermudez Guerrero, Leonardo M. was born 3 November 1994, is male, registered as No Party Affiliation, residing at 8874 Pebblebrooke Dr, Lakeland, Florida 33810. Florida voter ID number 122978483. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez Guillen, Jacinto was born 3 September 1970, is male, registered as Republican Party of Florida, residing at 2521 Sw 92Nd Pl, Miami, Florida 33165. Florida voter ID number 123649283. His email address is bethania_meza22000@yahoo.com. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez Guisandes, Yulimar was born 8 December 1976, registered as Florida Democratic Party, residing at 10030 Via Colomba Cir, Fort Myers, Florida 33966. Florida voter ID number 123066513. This is the most recent information, from the Florida voter list as of 31 March 2019. 31 July 2016 voter list: Yulimar Bermudez Guisandes, 5208 Bywood ST, Lehigh Acres, FL 33971 Florida Democratic Party. 29 February 2016 voter list: Yuliman Berrudez Guisandes, 5208 Bywood ST, Lehigh Acres, FL 33971 Florida Democratic Party. Bermudez Gusman, Juan Domingo was born 2 August 1966, is male, registered as Florida Democratic Party, residing at 2145 Santa Maria Ave Se, Palm Bay, Florida 32909. Florida voter ID number 118361378. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez Gutierrez, Cathleen was born 7 November 1975, is female, registered as No Party Affiliation, residing at 1610 Ellington Dr, Dundee, Florida 33838. Florida voter ID number 118945443. Her email address is cathleen.bermudez@live.com. This is the most recent information, from the Florida voter list as of 31 March 2019. 28 February 2018 voter list: CATHLEEN BERMUDEZ GUTIERREZ, 2260 ABEY BLANCO DR, ORLANDO, FL 32828 No Party Affiliation. 31 July 2016 voter list: CATHLEEN BERMUDEZ GUTIERREZ, 1309 FALLING STAR LN, ORLANDO, FL 32828 No Party Affiliation. 
31 October 2015 voter list: CATHLEEN BERMUDEZ GUTIERREZ, 354 MIRASOL LN, ORLANDO, FL 32828 No Party Affiliation. Bermudez Gutierrez, Orisneldas was born 26 January 1976, registered as Florida Democratic Party, residing at 5653 Louis Xiv Ct, Apt 2, Tampa, Florida 33614. Florida voter ID number 124279285. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez Guzman, Martin was born 22 April 1969, is male, registered as Florida Democratic Party, residing at 1814 Wildcat Ave Se, Palm Bay, Florida 32909. Florida voter ID number 124521292. This is the most recent information, from the Florida voter list as of 31 March 2019. BERMUDEZ-HERNANDEZ, ALEX was born 13 August 1954, is male, registered as Republican Party of Florida, residing at 1800 Tampa Ave, Clewiston, Florida 33440. Florida voter ID number 109500414. His telephone number is 1-305-496-7095. This is the most recent information, from the Florida voter list as of 31 March 2019. 29 February 2016 voter list: Alex Bermudez, 3114 W 70Th Ter, Hialeah, FL 33018 Republican Party of Florida. Bermudez-Hernandez, Elizabeth was born 8 August 1963, is female, registered as No Party Affiliation, residing at 8950 Sw 69Th Ct, Apt 201, Pinecrest, Florida 33156. Florida voter ID number 121419736. This is the most recent information, from the Florida voter list as of 31 March 2019. BERMUDEZ HERNANDEZ, FELIX RAFAEL was born 12 October 1962, is male, registered as Florida Democratic Party, residing at 1014 Laurel Hills Ct, Haines City, Florida 33844. Florida voter ID number 117721691. The voter lists a mailing address and probably prefers you use it: 1014 LAUREL HILLS CT HAINES CITY FL 33844 USA. This is the most recent information, from the Florida voter list as of 30 November 2014. Bermudez Hernandez, Frances Maria born 29 May 1980, Florida voter ID number 117012688 See Garcia, Frances Maria. 
Bermudez Hernandez, Frankie was born 10 May 1971, is male, registered as Republican Party of Florida, residing at 2503 51St Avenue Ter W, Bradenton, Florida 34207-2340. Florida voter ID number 118017334. This is the most recent information, from the Florida voter list as of 31 March 2019. 30 September 2016 voter list: Frankie Bermudez, 217 60th Avenue DR W, Bradenton, FL 342076042 Republican Party of Florida. 31 August 2016 voter list: FRANKIE BERMUDEZ HERNANDEZ, 1300 LOMA LINDA CT, SARASOTA, FL 34239 Republican Party of Florida. 22 October 2014 voter list: Frankie Bermudez Hernandez, 2513 51st Avenue TER W, Bradenton, FL 34207 Republican Party of Florida. 31 May 2012 voter list: Frankie Bermudez Hernandez, 235 Drawbridge LN, Valrico, FL 33594 Republican Party of Florida. Bermudez Hernandez, Gabriela was born 13 July 1996, is female, registered as No Party Affiliation, residing at 9218 Sw 148Th Ct, Miami, Florida 33196. Florida voter ID number 125446923. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez Hernandez, Isabel was born 10 July 1950, is female, registered as Republican Party of Florida, residing at 531 Peace Dr, Kissimmee, Florida 34759. Florida voter ID number 116935706. This is the most recent information, from the Florida voter list as of 31 March 2019. 31 July 2014 voter list: ISABEL BERMUDEZ HERNANDEZ, 423 PEACE CT, KISSIMMEE, FL 34759 Republican Party of Florida. 31 May 2012 voter list: ISABEL BERMUDEZ HERNANDEZ, 13 SAWFISH CT, KISSIMMEE, FL 34759 Republican Party of Florida. Bermudez Hernandez, Oscar was born 22 September 1963, is male, registered as Florida Democratic Party, residing at 4601 Lori Christine St, Haines City, Florida 33844. Florida voter ID number 113652758. His telephone number is 1-863-557-0996. This is the most recent information, from the Florida voter list as of 31 March 2019. 
30 April 2018 voter list: Oscar Bermudez, 4601 Lori Christine St, Haines City, FL 33844 Florida Democratic Party. Bermudez Herrera, Patricia was born 22 June 1964, is female, registered as Florida Democratic Party, residing at 5731 Nw 114Th Path, #110, Doral, Florida 33178. Florida voter ID number 126211030. Her email address is PATICA2000@HOTMAIL.COM. This is the most recent information, from the Florida voter list as of 31 March 2019. BERMUDEZ HINTON, ADRIANA MARIA was born 8 August 1984, is female, registered as Florida Democratic Party, residing at 1897 Donahue Dr, Ocoee, Florida 34761. Florida voter ID number 121969079. Her telephone number is 1-702-533-6919. This is the most recent information, from the Florida voter list as of 31 March 2019. 31 July 2018 voter list: Adriana Maria Bermudez-Hinton, 1984 Amhurst Ct, Navarre, FL 32566 Florida Democratic Party. 31 January 2017 voter list: Adriana Maria Bermudez, 37 Mayo ST, Hurlburt Field, FL 325441054 Florida Democratic Party. 31 October 2016 voter list: Adriana Maria Bermudez-Hinton, 413 Mulcahy Cir, Niceville, FL 325781762 Florida Democratic Party. Bermudez-Hinton, Adriana Maria born 8 August 1984, Florida voter ID number 121969079 See BERMUDEZ HINTON, ADRIANA MARIA. Bermudez Hormaza, Andres Alexander A. born 10 August 1975, Florida voter ID number 120729866 See Bermudez, Andres Alexander. Bermudez Huertas, Juan Santos was born 16 November 1984, is male, registered as No Party Affiliation, residing at 429 Bonnieview Dr, Valrico, Florida 33594. Florida voter ID number 125196154. This is the most recent information, from the Florida voter list as of 31 March 2019. BERMUDEZ-INOSTROZA, JUAN MANUEL was born 8 February 1949, is male, registered as Florida Democratic Party, residing at 2551 Sw Harbor Hills Rd, Dunnellon, Florida 34431. Florida voter ID number 119268116. His telephone number is 1-407-936-4429. This is the most recent information, from the Florida voter list as of 31 March 2019. 
30 November 2015 voter list: Juan Manuel Bermudez Inostroza, 3190 Queen Alexandria Dr, Kissimmee, FL 34744 Florida Democratic Party. Bermudez Inostroza, Juan Manuel born 8 February 1949, Florida voter ID number 119268116 See BERMUDEZ-INOSTROZA, JUAN MANUEL. BERMUDEZ-INOSTROZA, RAFAEL ANGEL was born 15 September 1962, is male, registered as No Party Affiliation, residing at 2551 Sw Harbor Hills Rd, Dunnellon, Florida 34431. Florida voter ID number 126488771. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez Jaramillo, Luis Alejandro was born 15 November 1991, registered as No Party Affiliation, residing at 7657 Bristol Cir, Naples, Florida 34120. Florida voter ID number 119254150. This is the most recent information, from the Florida voter list as of 30 November 2018. BERMUDEZ JR, DAVID ALBERTO born 8 October 1995, Florida voter ID number 122855222 See BERMUDEZ, DAVID ALBERTO. Bermudez Juliett, Ana Maria was born 18 July 1972, is female, registered as No Party Affiliation, residing at 15811 Collins Ave, Apt 2605, Sunny Isles Beach, Florida 33160. Florida voter ID number 117809429. This is the most recent information, from the Florida voter list as of 31 March 2019. 31 January 2018 voter list: Ana Maria Bermudez, 15811 Collins AVE, APT 2605, Sunny Isles Beach, FL 331604188 No Party Affiliation. 30 June 2017 voter list: Ana Maria Bermudez, 2113 S Red RD, Coral Gables, FL 33155 No Party Affiliation. 30 November 2015 voter list: Ana M. Bermudez, 2113 S Red Rd, Coral Gables, FL 33155 No Party Affiliation. 31 August 2015 voter list: Ana M. Bermudez, 2967 SW 1St AVE, Miami, FL 33129 No Party Affiliation. 31 March 2015 voter list: Ana M. Bermudez, 1301 Milan Ave, APT 5, Coral Gables, FL 33134 No Party Affiliation. 22 October 2014 voter list: Ana M. Bermudez, 1749 NE Miami CT, APT 416, Miami, FL 33132 No Party Affiliation. 
Bermudez Jusino, Luis Antonio was born 13 August 1996, is male, registered as Florida Democratic Party, residing at 2704 N 10Th St, Tampa, Florida 33605-2504. Florida voter ID number 122122252. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez Labrador, Emily Marie was born 8 February 1995, is female, registered as Florida Democratic Party, residing at 12111 Woodglen Cir, Clermont, Florida 34711. Florida voter ID number 125014044. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez Lacayo, Rosalina was born 19 July 1955, registered as Florida Democratic Party, residing at 10050 Sw 50Th Ter, Miami, Florida 33165. Florida voter ID number 124330419. This is the most recent information, from the Florida voter list as of 31 March 2019. 31 October 2016 voter list: Rosalina Bermudez, 10050 SW 50Th TER, Miami, FL 33165 Florida Democratic Party. Bermudez Lantigua, Marisol was born 30 May 1973, is female, registered as No Party Affiliation, residing at 12193 Sw 10Th St, Apt 5, Miami, Florida 33184. Florida voter ID number 125889625. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez Laureano, Ivan Jassiel was born 25 May 2000, is male, registered as No Party Affiliation, residing at 321 Shady Oak Ave, Lake Wales, Florida 33898. Florida voter ID number 123715857. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez Laureano, Julie M. was born 30 January 1968, is female, registered as Florida Democratic Party, residing at 321 Shady Oak Ave, Lake Wales, Florida 33898. Florida voter ID number 113735628. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez Lebron, Abigail was born 28 April 1958, is female, registered as Florida Democratic Party, residing at 933 Country Cir, Apt E, Kissimmee, Florida 34744. Florida voter ID number 116662391. 
The voter lists a mailing address and probably prefers you use it: PO BOX 423131 Kissimmee FL 34742-3131. This is the most recent information, from the Florida voter list as of 30 April 2018. 31 March 2016 voter list: Abigail Bermudez, 933 Country Cir, apt E, Kissimmee, FL 34744 Florida Democratic Party. 31 March 2015 voter list: Abigail Bermudez, 4110 Arrow Ridge Pl, #207, Kissimmee, FL 34741 Florida Democratic Party. 31 May 2012 voter list: Abigail Bermudez Lebron, 2407 Stoney WAY, APT F, Kissimmee, FL 34744 Florida Democratic Party. Bermudez Lopez, Carmen D. was born 28 June 1943, is female, registered as Florida Democratic Party, residing at 187 Crestwood Pass, Davenport, Florida 33897. Florida voter ID number 116207595. Her telephone number is 1-863-424-5327. This is the most recent information, from the Florida voter list as of 31 March 2019. 31 May 2012 voter list: CARMEN BERMUDEZ, 187 CRESTWOOD PASS, DAVENPORT, FL 33897 Florida Democratic Party. BERMUDEZ LOPEZ, CAWILMARY was born 25 December 1979, is female, registered as No Party Affiliation, residing at 14410 Island Cove Dr, Orlando, Florida 32824. Florida voter ID number 122636585. This is the most recent information, from the Florida voter list as of 31 March 2019. Bermudez-Lopez, David Gabriel was born 22 April 1983, is male, registered as No Party Affiliation, residing at 11078 Spring Point Cir, Riverview, Florida 33579. Florida voter ID number 118130034. This is the most recent information, from the Florida voter list as of 31 March 2019. 30 September 2018 voter list: David Gabriel Bermudez-Lopez, 3127 SUMMER HOUSE DR, Valrico, FL 33594 No Party Affiliation. 31 May 2012 voter list: David Gabriel Bermudez-Lopez, 707 Chilt DR, Brandon, FL 33510 No Party Affiliation. Bermudez Lopez, Deandrea Monique born 4 August 1982, Florida voter ID number 118301633 See Lopez, Deandrea Monique. 
BERMUDEZ LOPEZ, EMILIO was born 9 February 1933, registered as No Party Affiliation, residing at 1507 Club Cir, Lakeshore, Florida 33854. Florida voter ID number 113832910. The voter lists a mailing address and probably prefers you use it: PO BOX 8527 LAKESHORE FL 33854 USA. This is the most recent information, from the Florida voter list as of 31 May 2012. Bermudez Lopez, Hector Javier was born 6 February 1988, is male, registered as Republican Party of Florida, residing at 2357 Deer Creek Blvd, St. Cloud, Florida 34772. Florida voter ID number 116288645. This is the most recent information, from the Florida voter list as of 31 March 2019. 28 February 2017 voter list: Hector Javier Bermudez Lopez, 821 Florida Pkwy, Kissimmee, FL 347439606 Republican Party of Florida. Bermudez Lopez, Maria V. was born 15 May 1963, is female, registered as No Party Affiliation, residing at 1343 W 43Rd Pl, Hialeah, Florida 33012. Florida voter ID number 120290940. This is the most recent information, from the Florida voter list as of 31 March 2019. 30 June 2016 voter list: Maria V. Bermudez, 9807 W Okeechobee RD, 110, Hialeah Gardens, FL 33016 No Party Affiliation. Bermudez Lopez, Nydia Esther was born 18 February 1945, is female, registered as Florida Democratic Party, residing at 208 Mante Dr, Kissimmee, Florida 34743. Florida voter ID number 106270718. This is the most recent information, from the Florida voter list as of 31 March 2019. 31 January 2019 voter list: Nydia E. Bermudez Lopez, 1385 W Donegan AVE, APT F, Kissimmee, FL 34741 Florida Democratic Party. 29 February 2016 voter list: Nydia Esther Bermudez Lopez, 1385 W Donegan Ave, APT F, Kissimmee, FL 34741 Florida Democratic Party. 31 March 2014 voter list: Nydia Esther Bermudez Lopez, 11470 NW 56Th Dr, APT 104, Coral Springs, FL 33076 Florida Democratic Party. 
Bermudez-Lopez, Orlando was born 3 January 1940, is male, registered as Republican Party of Florida, residing at 12401 W Okeechobee Rd, #279, Hialeah Gardens, Florida 33018. Florida voter ID number 116112604. This is the most recent information, from the Florida voter list as of 31 March 2019.

Bermudez Lopez, Pamela Karina was born 22 March 1980, is female, registered as Florida Democratic Party, residing at 18140 Nw 68Th Ave, 105, Hialeah, Florida 33015. Florida voter ID number 121895349. Her email address is pberm013@Fiu.edu. This is the most recent information, from the Florida voter list as of 31 March 2019.

Bermudez Lopez, Tomas was born 1 May 1974, is male, registered as No Party Affiliation, residing at 5267 Images Cir, Apt 103, Kissimmee, Florida 34746. Florida voter ID number 123528123. This is the most recent information, from the Florida voter list as of 31 March 2019. 30 June 2018 voter list: TOMAS BERMUDEZ LOPEZ, 1849 S KIRKMAN RD, APT 1127, ORLANDO, FL 32811 No Party Affiliation. 31 May 2016 voter list: TOMAS BERMUDEZ LOPEZ, 1881 S KIRKMAN RD, APT 727, ORLANDO, FL 32811 No Party Affiliation.

Bermudez Lozada, Lester Alexis was born 26 August 1994, is male, registered as Florida Democratic Party, residing at 2711 44Th St Sw, Lehigh Acres, Florida 33976. Florida voter ID number 122460790. This is the most recent information, from the Florida voter list as of 31 July 2017.

BERMUDEZ LOZANO, JAFFET was born 16 June 1982, is male, registered as No Party Affiliation, residing at 324 Pearl St, Lake Wales, Florida 33853. Florida voter ID number 122944642. The voter lists a mailing address and probably prefers you use it: 324 1/2 PEARL STREET LAKE WALES FL 33853-0000. This is the most recent information, from the Florida voter list as of 31 December 2018. 31 March 2016 voter list: Jaffet Bermudez Lozano, 10308 FOREST HILLS DR, Tampa, FL 336127319 No Party Affiliation.

Bermudez Luciano, Jose Antonio was born 27 February 1950, is male, registered as No Party Affiliation, residing at 5916 Windsong Oak Dr, Leesburg, Florida 34748. Florida voter ID number 120148036. This is the most recent information, from the Florida voter list as of 31 March 2019.

Bermudez Luciano, Luz Selenia was born 23 October 1949, is female, registered as Florida Democratic Party, residing at 203 E Holly Dr, Orange City, Florida 32763-7512. Florida voter ID number 108705484. This is the most recent information, from the Florida voter list as of 31 March 2019.

Bermudez Luna, Esperanza Lee was born 19 August 1980, is female, registered as No Party Affiliation, residing at 12101 N Dale Mabry Hwy, Apt 909, Tampa, Florida 33618. Florida voter ID number 122631157. This is the most recent information, from the Florida voter list as of 31 March 2019.

Bermudez Luna, Rosidel was born 6 April 1988, is male, registered as Florida Democratic Party, residing at 12101 N Dale Mabry Hwy, Apt 909, Tampa, Florida 33618. Florida voter ID number 122941116. This is the most recent information, from the Florida voter list as of 31 March 2019.

Bermudez Maldonado, Cristina was born 1 February 1994, is female, registered as Republican Party of Florida, residing at 821 Balmoral Dr, Davenport, Florida 33896. Florida voter ID number 125023593. This is the most recent information, from the Florida voter list as of 31 March 2019.

BERMUDEZ MANGUAL, JOSE EDUARDO was born 12 May 1965, is male, registered as Florida Democratic Party, residing at 2780 Dueby St, Sarasota, Florida 34231. Florida voter ID number 125216568. This is the most recent information, from the Florida voter list as of 31 March 2019. 31 July 2018 voter list: JOSE EDUARDO BERMUDEZ MANGUAL, 3217 BAILEY ST, SARASOTA, FL 34237 Florida Democratic Party.

BERMUDEZ MARRERO, JESENIA NICOLE was born 28 February 1995, is female, registered as No Party Affiliation, residing at 7573 Thelma Way, Orlando, Florida 32822. Florida voter ID number 121338884. This is the most recent information, from the Florida voter list as of 31 March 2019. 31 August 2016 voter list: JESENIA NICOLE BERMUDEZ, 7573 THELMA WAY, ORLANDO, FL 32822 No Party Affiliation.

Bermudez-Marrero, Jesse James was born 30 August 1999, is male, registered as Florida Democratic Party, residing at 2912 Foraker Way, Kissimmee, Florida 34758. Florida voter ID number 125439652. This is the most recent information, from the Florida voter list as of 31 March 2019. 31 October 2018 voter list: Jesse Bermudez, 2912 Foraker WAY, Kissimmee, FL 34758 Florida Democratic Party.

Bermudez Marrero, Jose Miguel was born 12 October 1982, is male, registered as No Party Affiliation, residing at 4606 Devon Ave, Lakeland, Florida 33813. Florida voter ID number 124613002. This is the most recent information, from the Florida voter list as of 31 March 2019.

Bermudez Martin, Victoria Del Carmen was born 23 March 1933, registered as Florida Democratic Party, residing at 4207 Hollowtrail Dr, Tampa, Florida 33624. Florida voter ID number 126497204. This is the most recent information, from the Florida voter list as of 31 March 2019.

BERMUDEZ MARTINEZ, ERIC was born 2 August 1976, is male, registered as No Party Affiliation, residing at 1305 Hawkes Ave, Orlando, Florida 32809. Florida voter ID number 112846479. This is the most recent information, from the Florida voter list as of 31 March 2019. 31 May 2012 voter list: ERIC BERMUDEZ MARTINEZ, 14907 DAY LILY CT, ORLANDO, FL 32824 No Party Affiliation.

Bermudez Martinez, Frances M. was born 2 May 1982, Florida voter ID number 119635562. See Bermudez, Frances M.

Bermudez Martinez, Karyvette was born 4 August 1980, is female, registered as No Party Affiliation, residing at 10738 Great Falls Ln, Tampa, Florida 33647. Florida voter ID number 121793112. This is the most recent information, from the Florida voter list as of 31 March 2019. 30 September 2014 voter list: Karyvette Bermudez Martinez, 4307 E CITRUS CIR, Tampa, FL 336175901 No Party Affiliation.

BERMUDEZ MARTINEZ, KEYLA IRELIS was born 10 January 1983, is female, registered as No Party Affiliation, residing at 705 Ibsen Ave, Orlando, Florida 32809. Florida voter ID number 112846486. The voter lists a mailing address and probably prefers you use it: PO BOX 592372 ORLANDO FL 32859. This is the most recent information, from the Florida voter list as of 31 March 2019. 31 August 2015 voter list: KEYLA I. BERMUDEZ, 8350 OAK BLUFF DR, ORLANDO, FL 32827 No Party Affiliation.

Bermudez Martinez, Leonor was born 14 August 1936, is female, registered as Republican Party of Florida, residing at 3100 Coral Springs Dr, Apt 1 A, Coral Springs, Florida 33065. Florida voter ID number 115695172. This is the most recent information, from the Florida voter list as of 31 October 2015.

BERMUDEZ MARTINEZ, THALIA ROXANE was born 22 March 1997, Florida voter ID number 122428200. See BERMUDEZ, THALIA ROXANE.

Bermudez Medina, Daryl was born 7 July 1990, is male, registered as No Party Affiliation, residing at 1111 Doncaster Ct, Kissimmee, Florida 34758-3061. Florida voter ID number 121048123. This is the most recent information, from the Florida voter list as of 31 March 2019.

Bermudez Medina, David was born 12 June 1979, is male, registered as No Party Affiliation, residing at 1097 Brenton Manor Dr, Winter Haven, Florida 33881. Florida voter ID number 113033615. His telephone number is 1-407-486-0214. His email address is D1979DABO@YAHOO.COM. This is the most recent information, from the Florida voter list as of 31 March 2019. 31 January 2016 voter list: David Bermudez, 1111 Doncaster Ct, Kissimmee, FL 347583061 No Party Affiliation.

Bermudez Medina, Denney was born 9 February 1981, is male, registered as No Party Affiliation, residing at 3226 Cleopatra Ct, St. Cloud, Florida 34771. Florida voter ID number 114941390. His telephone number is 1-407-508-2915. This is the most recent information, from the Florida voter list as of 31 March 2019. 31 December 2018 voter list: Denney Bermudez Medina, 1525 Eola CIR, Kissimmee, FL 34741 No Party Affiliation. 31 March 2014 voter list: Denney Bermudez Medina, 618 Mabbette ST, #3, Kissimmee, FL 34741 No Party Affiliation. 31 May 2012 voter list: Denney Bermudez, 1111 Doncaster Ct, Kissimmee, FL 34758 No Party Affiliation.

Bermudez Medina, Joaquin Eduardo was born 10 April 1967, is male, registered as Florida Democratic Party, residing at 2578 Jasmine Trace Dr, Kissimmee, Florida 34758. Florida voter ID number 125356962. This is the most recent information, from the Florida voter list as of 31 March 2019.

Bermudez Medina, Raul Gabriel was born 28 August 1984, is male, registered as No Party Affiliation, residing at 2775 Woodland Creek Loop, Kissimmee, Florida 34744. Florida voter ID number 115152009. This is the most recent information, from the Florida voter list as of 31 May 2012.
2005-08-26: Assigned to XEROX CORPORATION (assignment of assignors' interest). Assignors: LOFTHUS, ROBERT M.; LYSY, DUSAN G.; MOORE, STEVEN R.; RADULSKI, CHARLES A.; ANDERSON, DAVID G. A printing system includes a monochrome marking engine for printing monochrome images and a color marking engine which can print both color and monochrome images. A previewer identifies attributes of the print job, for example, for each page, identifying any monochrome and color images. A user interface enables a user to select a print mode for the print job from a plurality of print modes. A scheduler is responsive to the previewer and the user interface for assigning pages of the print job among the marking engines based on the attributes of the print job and the user-selected print mode. A marking engine controller is in communication with the scheduler for controlling the monochrome marking engine to render pages of the print job assigned thereto and for controlling the color marking engine to render pages of the print job assigned thereto. This application claims the priority of U.S. Provisional Application Ser. No. 60/631,651 (Attorney Docket No. 20031830-US-PSP), filed Nov. 30, 2004, entitled “TIGHTLY INTEGRATED PARALLEL PRINTING ARCHITECTURE MAKING USE OF COMBINED COLOR AND MONOCHROME ENGINES,” by David G. Anderson, et al., which is incorporated herein in its entirety by reference. The present exemplary embodiment relates generally to a printing system containing at least a first marking engine and a second marking engine, and more particularly concerns a printing system comprising a monochrome marking engine and a color marking engine with an integral paper path which enables user selection from a plurality of print modes, each print mode according different weights to goals such as run cost, productivity, and image quality in the printing of a print job.
In a typical xerographic marking device, such as a copier or printer, a photoconductive insulating member is charged to a uniform potential and thereafter exposed to a light image of an original document to be reproduced. The exposure discharges the photoconductive insulating surface in exposed or background areas and creates an electrostatic latent image on the member, which corresponds to the image areas contained within the document. Subsequently, the electrostatic latent image on the photoconductive insulating surface is made visible by developing the image with a developing material. Generally, the developing material comprises toner particles adhering triboelectrically to carrier granules. The developed image is subsequently transferred to a print medium, such as a sheet of paper. The fusing of the toner onto paper is generally accomplished by applying heat to the toner with a heated roller and application of pressure. In multi-color printing, successive latent images corresponding to different colors are recorded on the photoconductive surface and developed with toner of a complementary color. The single color toner images are successively transferred to the copy paper to create a multi-layered toner image on the paper. The multi-layered toner image is permanently affixed to the copy paper in the fusing process. A common trend in the office equipment market, particularly in relation to copiers and printers, is to organize a machine on a modular basis, wherein certain distinct subsystems of the machine are bundled together into modules which can be readily removed from the machine and replaced with new modules of the same type. A modular design facilitates servicing and repair, since a representative of the service provider simply removes the defective module. Actual repair of the module can take place off site, at the service provider's premises. Recently, printing systems have been developed which include a plurality of marking engine modules. 
These systems enable high overall outputs to be achieved by printing portions of the same document on multiple printers. Such systems are commonly referred to as “tandem engine” printers, “parallel” printers, or “cluster printing” systems (in which an electronic print job may be split up for distributed, higher-productivity printing by different marking engines, such as separate printing of the color and monochrome pages). In such machines, color marking engines which print with cyan, magenta, and yellow (CMY) as well as black (K) toners allow printing of both color and black images on a single marking engine. However, the cost of producing black prints on a color marking engine is often higher than for a dedicated monochrome device. One reason for this is that the color components are often cycled, even during black printing. Although in some systems the color components can be disabled during the production of monochrome prints, this tends to increase mechanical complexity to provide for retraction of the color components and to disengage their drives. Another reason for the higher cost is that the marking engine may provide a certain interdocument color toner throughput to control toner age in the system. Another source of increased cost is that the black toner in CMYK marking engines is generally made xerographically compatible with the C, M, and Y toners, which often makes the toner formulation more complex, and thus more expensive, than that required for a monochrome marking engine. Related subject matter is disclosed in U.S. application Ser. No. 11/168,152 (Attorney Docket 20020324-US-NP), filed Jun. 28, 2005, entitled “ADDRESSABLE IRRADIATION OF IMAGES,” by Kristine A. German, et al. Aspects of the present disclosure in embodiments thereof include a printing system and a method of printing.
In one aspect, the printing system includes at least one monochrome marking engine for printing monochrome images, at least one color marking engine, and a previewer which identifies attributes of a print job comprising a plurality of pages including, for each page of the print job, identifying whether the page includes a monochrome image and identifying whether the page includes a color image. A user interface enables a user to select a print mode for the print job from a plurality of user-selectable print modes. A scheduler is responsive to the previewer and the user interface, for assigning pages of the print job among the marking engines based on the attributes of the print job and the user-selected print mode. At least one marking engine controller is in communication with the scheduler, for controlling the at least one monochrome marking engine to render pages of the print job assigned thereto and for controlling the at least one color marking engine to render pages of the print job assigned thereto. In another aspect, for a print job having a plurality of pages, in a printing system including at least one monochrome marking engine for printing monochrome images, at least one color marking engine, operatively connected to the monochrome marking engine, a method for printing includes identifying attributes of the print job including, for each page, identifying if the page includes a monochrome image and identifying if the page includes a color image. 
The method further includes establishing a print mode for the print job from a plurality of print modes, the plurality of print modes including at least one user-selectable print mode, assigning pages of the print job among the marking engines based on the attributes of the print job and the user-selected print mode, and controlling the at least one monochrome marking engine to render pages of the print job assigned thereto and controlling the at least one color marking engine to render pages of the print job assigned thereto. In another aspect, a xerographic printing system includes at least a first marking engine which prints images of a first type but which does not print images of a second type and at least a second marking engine which prints images of the first type and images of the second type. A user interface enables a user to select a print mode from a plurality of print modes, including: a first print mode in which all pages of the print job which have an image of the first type are assigned to the first marking engine for printing the image of the first type, and in which pages of the print job having an image of the first type and also an image of the second type are also assigned to the second marking engine for printing images of the second type; and a second print mode in which at least a portion of the pages having only images of the first type are assigned to the at least one second marking engine. The printing system executes the print job according to the print mode selected. FIG. 3 is a sectional view of the exemplary printing system of FIG. 2, incorporating a plurality of marking engines of the type illustrated in FIG. 1. Aspects of the exemplary embodiment relate to a printing system and a method of printing. The exemplary printing system is configured for printing an electronic print job having a plurality of pages and includes at least one monochrome marking engine and at least one color marking engine.
The monochrome marking engine prints monochrome images by using a single colorant, such as black for black images, or a custom color colorant for custom color images. Images, as used herein, may include text, graphics, photographs, and the like. A page may include both monochrome and color images, either spaced from each other or overprinted one over the other. The color (P) marking engine has the capability for using more than one colorant, such as cyan, magenta, yellow, and optionally also black (CMYK) toners or inks, and may sometimes be referred to as a process color marking engine. The color marking engine is thus capable of printing both color images, by combinations of colorants, as well as images which would be printed in monochrome if printed by the monochrome marking engine, which will be referred to herein as monochrome images, even though the color printer may use more than one colorant to print them. For example, a black (K) image is printed with a black colorant when printed on the monochrome marking engine and generally also with a black colorant (K) when printed on a CMYK color marking engine (although it could also be printed with a combination of cyan, magenta, and yellow which approximates black), while a custom color (C) image which can be printed by the monochrome marking engine with a single colorant may use two or more colorants when printed on the color marking engine. Color images are those which can be printed on the color marking engine but not on the monochrome engine. The color marking engine may have a lower productivity (throughput) than the monochrome marking engine. For example, the color marking engine may have a maximum productivity of about 50 prints per minute (ppm), whereas the monochrome marking engine may have a maximum productivity of over 100 ppm.
Monochrome marking engines, such as black and custom color marking engines, may be fed with a dyed or pigmented ink or toner, or a premixed ink or toner, which provides a specific color, generally with a higher color rendering accuracy than can be achieved with a process color marking engine. Custom color (C) here is used interchangeably with other terms in the trade, such as signature color, highlight color, or Pantone™ color. While the monochrome marking engine will be described with general reference to a black (K) marking engine it will be appreciated that other monochrome marking engines, such as those which print in custom color (C), are also contemplated. The color marking engine may be operatively connected to the monochrome marking engine via a common marking engine control system and be arranged in an integrated parallel printing architecture therewith whereby portions of the print job may be performed by different marking engines and then brought together in a common stream. Each page of the print job may include an image or a set of images which will appear on the same side of a sheet of print media in the executed print job. The identification of images to be associated with a particular page may be determined by a previewer, which previews the print job for attributes, such as job level attributes, page attributes, and image attributes. This may include for each page, identifying whether the page includes a monochrome image and whether the page includes a color image and, by inference, whether the page includes both monochrome and color images. A user interface enables a user to select a mode of printing from a plurality of different print modes for executing the print job. The control system may include a scheduler, which is responsive to the previewer and the user interface. The scheduler may assign portions of the print job among the marking engines based on the attributes of the print job and the user-selected printing mode. 
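The per-page attribute model and the previewer pass described above can be sketched in a few lines. This is a minimal illustration only; the `Page` record and `preview_job` helper are invented names, not part of the patent:

```python
from dataclasses import dataclass

# Minimal sketch of the previewer described in the text; Page and
# preview_job are hypothetical names chosen for illustration.
@dataclass
class Page:
    number: int
    has_monochrome: bool  # page carries a black (K) or custom color (C) image
    has_color: bool       # page carries a process color (P) image

def preview_job(pages):
    """Derive job-level attributes from the per-page attributes."""
    return {
        "total_pages": len(pages),
        "color_pages": sum(p.has_color for p in pages),
        "monochrome_pages": sum(p.has_monochrome for p in pages),
        "mixed_pages": sum(p.has_color and p.has_monochrome for p in pages),
    }

job = [Page(1, True, False), Page(2, True, True), Page(3, False, True)]
print(preview_job(job))
```

A scheduler could then consult these counts, together with the selected print mode, when assigning pages to engines.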
Each of the printing modes applies a different constraint or different set of constraints. The control system may also include a paper path controller, in communication with the scheduler, which controls the monochrome marking engine to render the portions of the print job assigned thereto and controls the color marking engine to render the portions of the print job assigned thereto. Print medium generally includes a usually flimsy physical sheet of paper, plastic, or other suitable physical print media substrate for images, whether precut or web fed. An electronic print job is normally a set of related images from a particular user, or otherwise related which, when executed by the printing system, form a physical document, such as one or more collated copy sets copied from a set of original print job sheets or electronic document page images. Print job execution involves printing images on front, back, or front and back sides of one or more sheets of paper or other print media. Some sheets may be left completely blank. Some sheets may have both color and black images. Execution of the print job may also involve collating the sheets in a certain order. Still further, the print job may include folding, stapling, punching holes into, or otherwise physically manipulating or binding the sheets. c) improving the consistency of image appearance (such as gloss or color) between images (a quality goal), which may be achieved by using a marking engine or engines which are capable of providing consistency between images, often by using a single color marking engine to print all of the color images. FIG. 1 is a simplified partially-elevational, partially-schematic view of a marking engine 1. The marking engine 1 may serve as a replaceable xerographic module for a printing system 10, such as a xerographic printing system 10, of the type shown in FIGS. 2 and 3. While FIG. 
3 illustrates a combination digital copier/printer, the printing system may alternatively be a copier or printer that outputs prints in whatever manner, such as a digital printer, facsimile, or multifunction device, and can create images electrostatographically, by ink-jet, hot-melt, or by any other method. The marking media used by the marking engine can include toner particles, solid or liquid inks, or the like. The printing system may incorporate “tandem engine” printers, “parallel” printers, “cluster printing,” “output merger,” or “interposer” systems, and the like, as disclosed, for example, in U.S. Pat. Nos. 4,579,446; 4,587,532; 5,489,969; 5,568,246; 5,570,172; 5,596,416; 5,995,721; 6,554,276; 6,654,136; and 6,607,320, and in copending U.S. application Ser. No. 10/924,459, filed Aug. 23, 2004, for Parallel Printing Architecture Using Image Marking Engine Modules, by Mandel, et al., and application Ser. No. 10/917,768, filed Aug. 13, 2004, for Parallel Printing Architecture Consisting of Containerized Image Marking Engines and Media Feeder Modules, by Robert Lofthus, the disclosures of all of these references being incorporated herein by reference. A parallel printing system typically feeds paper from a common paper stream to a plurality of printers, which may be horizontally and/or vertically stacked. Printed media from the various printers is then taken from the printer to a finisher where the sheets associated with a single print job are assembled. Variable vertical level, rather than horizontal, input and output sheet path interface connections may be employed, as disclosed, for example, in U.S. Pat. No. 5,326,093 to Sollitt. The marking engine 1 includes many of the hardware elements employed in the creation of desired images by electrophotographical processes. In the case of a xerographic device, the marking engine typically includes a charge retentive surface, such as a rotating photoreceptor 12 in the form of a belt or drum.
The images are created on a surface of the photoreceptor. Disposed at various points around the circumference of the photoreceptor 12 are xerographic subsystems, which include: a cleaning device, generally indicated as 14; a charging station, such as a charging corotron 16, for each of the colors to be applied (one in the case of a monochrome printer, four in the case of a CMYK printer); an exposure station 18, such as a Raster Output Scanner (ROS), which forms a latent image on the photoreceptor; a developer unit 20, associated with each charging station, for developing the latent image formed on the surface of the photoreceptor by applying a toner to obtain a toner image; a transferring unit, such as a transfer corotron 22, which transfers the toner image thus formed to the surface of a print media substrate, such as a sheet of paper; and a fuser 24, which fuses the image to the sheet. The fuser generally applies at least one of heat and pressure to the sheet to physically attach the toner and optionally to provide a level of gloss to the printed media. In any particular embodiment of an electrophotographic marking engine, there may be variations on this general outline, such as additional corotrons, cleaning devices, or, in the case of a color printer, multiple developer units. The xerographic subsystems 14, 16, 18, 20, 22, and 24 are controlled by a marking engine controller 26, such as a CPU associated with actuators for each of the subsystems. While the marking engine controller 26 is illustrated as a single unit, it is to be appreciated that the actuators may be distributed throughout the marking engine, for example, located in the xerographic subsystems. The marking engine controller 26 may adjust various xerographic parameters, for example, Developed Mass Area (DMA), transfer currents, and fuser temperature, to produce high quality prints.
The marking engine controller 26 may be also linked to other known components, such as a memory, a marking cartridge platform, a marking driver, a function switch, a self-diagnostic unit, all of which can be interconnected by a data/control bus. With reference to FIG. 2 the printing system 10 includes a plurality of marking engines 100, 102, 104, 106, which may be configured as for the marking engine 1 shown in FIG. 1. The various marking engines are associated for integrated parallel printing of documents within the printing system 10 and are under the control of a common control system 110, which may be located in a suitable central processor, such as a CPU. It will be appreciated that various parts of the control system 110 may be distributed, for example, located in the marking engines, and connected with the central processor by suitable links. The central control system 110 may communicate with the marking engine controllers 26 in order to effectuate a print job. While in the embodiment illustrated in FIG. 2, each marking engine has its own marking engine controller 26, it is to be appreciated that two or more marking engines in the printing system may have a common marking engine controller. As shown schematically in FIG. 2, each marking engine can receive image data, which can include pixels, in the form of digital image signals for processing from a source of image data, such as computer network/server 112 or scanner 113, by way of a suitable link or communication channel 114, which feeds the signal to an interface unit (IU) 116 of the printing system. While the illustrated interface unit 116 is part of the digital printing system, it is also contemplated that the computer network or the scanner may share or provide the function of converting the digital image data into a utilizable form. A conversion unit, which may be located in the interface unit 116, may convert the image data into an electronic form which is usable by the printing system. 
A previewer 204 receives information on the electronic print job from the interface unit 116 and, for a plurality of pages in the print job, identifies page attributes, including any images assigned to the page which require a color printer (color images) and any images which can be printed on either the monochrome engine or the color marking engine (such as black images). It will be appreciated that there may be images which can only be printed on a dedicated marking engine which is capable of printing only those images, such as magnetic ink character recognition (MICR) images, although for simplicity, these will not be discussed herein. In the exemplary architecture of FIG. 3, the four marking engines 100, 102, 104, and 106 are shown interposed between a feeder module 120 and a finishing module 122. At least a first of the marking engines 100, 106 is a monochrome engine, such as black (K), and at least a second of the marking engines 102, 104 is a color (P) marking engine, capable of color printing as well as black (K). In the embodiment shown in FIG. 3, marking engines 100, 102, 104, and 106 are of the following print modalities: a black marking engine (K), two process color marking engines (P), and a custom color marking engine (C), respectively. As will be appreciated, two or more of the marking engines may be of the same print modality, such as two black (K) and two process color (P) marking engines. The marking engines 100, 102, 104, 106 are connected with each other and with the feeder module 120 and an output destination 122 by a conveyor system 124 including a network of paper pathways. The conveyor system 124 is controllable for directing print media to a monochrome marking engine 100 or to a color marking engine 102, 104, such that monochrome images can be applied either by a monochrome engine or by a color marking engine.
In the illustrated embodiment, the conveyor system 124 is controllable for delivering print media from the feeder module 120 to any one of the marking engines and between any marking engine and any other marking engine in the system. Additionally, the conveyor system 124 enables print media to be printed by two or more of the marking engines contemporaneously. For example, K printing can be performed by monochrome marking engine 100 on a portion of a print job, while at the same time, K printing is performed by the process color marking engine 102 on another portion of the print job. The job output destination 122 can be any post-printing destination where the printed pages of a document are brought together, ordered in a sequence in which they can be assembled into the finished document, such as a finisher or a temporary holding location. The finisher can be any post-printing accessory device such as an inverter, reverter, sorter, mailbox, inserter, interposer, folder, stapler, collator, stitcher, binder, over-printer, envelope stuffer, postage machine, output tray, or the like. The conveyor system 124 includes a plurality of drive elements 125, illustrated as pairs of rollers, although other drive elements, such as airjets, spherical balls, and the like, are also contemplated. The paper pathway network 124 may include at least one downstream print media highway 126, 128 (two in the illustrated embodiment), and at least one upstream print media highway 130, along which the print media is conveyed in a generally opposite direction to the downstream highways 126, 128 and which may be connected with the upstream highway(s) to form loops. The highways 126, 128, 130 are arranged generally horizontally, and in parallel in the illustrated embodiment, although it is also contemplated that portions of these highways may travel in other directions, including vertically.
The main highways 126, 128, 130 are connected at ends thereof with each other, and with the feeder module 120 and finisher module 122, by cloverleaf connection pathways 132, 134. Pathways 140, 142, 144, 146, 148, 150, 152, 154, etc., feed the print media between the highways 126, 128, 130 and the marking engines 100, 102, 104, 106. The highways 126, 128, 130 and/or pathways 140, 142, 144, 146, 148, 150, 152, 154 may include inverters, reverters, interposers, bypass pathways, and the like, as known in the art, to direct the print media between the highway and a selected marking engine or between two marking engines. For example, as shown in FIG. 3, each marking engine has an input side inverter 160 and an output side inverter 162 connected with the respective input and output pathways. The network 124 is structured such that one or both of the inverters 160, 162 can be bypassed, in the illustrated embodiment by incorporation of bypass pathways 164 on the input and/or output sides, respectively. As the document is being processed for image transfer through the marking engine 100, the document may be transported at a relatively slower speed, herein referred to as the engine marking speed. However, when outside of the marking engine 100, the document can be transported through the interconnecting high speed highways at a relatively higher speed. In inverter assembly 160, a document exiting the highway 126 at the highway speed can be slowed down before entering marking engine 100 by decoupling the document from the highway 126 at the inverter, receiving the document into the inverter assembly at the higher speed, adjusting the reversing process-direction motor speed to the slower marking engine speed, and then transporting the document at the slower speed to the marking engine 100.
Additionally, if a sheet has been printed in marking engine 100, it can exit the marking engine at the marking engine speed and can be received in the exit inverter assembly 162 at the marking engine speed, be decoupled from the marking engine and transported for re-entering the high speed highway 126 at the highway speed. Additionally, as noted above, any one of the inverter assemblies shown in any of the architectures could also be used to register the document in skew or in a lateral direction. Print media from the various marking engines and highways is collected as a common stream and delivered by a pathway 170 to the finisher 122. Thus, print media which has been marked by the at least one monochrome marking engine 100, 106, can be delivered to the same output destination as print media which has been marked by one of the at least one color marking engines 102, 104. The finisher 122 may include one or a plurality of output destinations, herein illustrated as output trays 172, 174. The feeder module 120 may include one or more print media sources, such as paper trays 176, 178, etc. While in the illustrated embodiment, all of the marking engines 100, 102, 104, 106 are fed from a common high speed feeder module 120, it is also contemplated that the marking engines may be associated with separate print media feeders. For example, each marking engine may have its own dedicated print media source or a group of marking engines may be associated with a print media source. The control system 110 includes a paper path controller 200 which is responsive to a scheduler 202. The paths in which print media documents are directed through the network 124 are controlled by the paper path controller 200, which controls the functions of paper handling. The scheduler 202, through accessing information on the capabilities of the marking engines, schedules an itinerary for a print job. 
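As a toy illustration of such an itinerary, a sheet's route can be pictured as a chain from the feeder, through whichever marking engine(s) the sheet needs, to the finisher. The node names and the `itinerary` helper below are invented for illustration; they are not the patent's reference numerals:

```python
# Toy itinerary builder; node names are illustrative, not the
# patent's reference numerals.
def itinerary(engine_stops):
    """Chain the engine visits a sheet needs into a feeder-to-finisher route."""
    return ["feeder"] + list(engine_stops) + ["finisher"]

# A page with only a black image visits the monochrome engine:
print(itinerary(["K_engine"]))
# A page with both black and color images can visit both engine types:
print(itinerary(["K_engine", "P_engine"]))
```

In the real system the scheduler would additionally pick concrete highways and inverters between each pair of stops and confirm their availability at the scheduled times.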
The itinerary provides for the routing of print media to and from appropriate ones of the marking engines 100, 102, 104, and 106 by utilizing appropriate pathways of the conveyor system 124. In creating an itinerary, the scheduler receives and utilizes information about the print job to be printed from the job previewer 204, which may be located along with the scheduler 202 and paper path controller 200 within the overall control system 110 for the printing system or elsewhere, such as in the network server or on a personal computer linked to the printing system. Prior to printing of a print job, which may be realized in the form of a document or plurality of documents, the job previewer 204 may determine overall print job level attributes and image attributes as well as the individual page attributes. The job level attributes may include the number of pages in the print job that have color images on them, the number of pages that have black images on them, and so forth. The page attributes, as discussed above, may include monochrome and color images for each page and may further include other types of images or no images. Where there is more than one type of monochrome image, such as black and custom color, these may be separately identified as page attributes. The image attributes may include color content, line screen frequency and type, and the like. The job previewer may include an algorithm for classifying each page of the plurality of pages of the print job into a predefined color group selected from a set of color groups, depending on the page attributes. The color groups may include black only (K), where the page has only a black image, custom color only (C), where a page has only a custom color image, process color only (P), where a page has only a process color image, and one or more groups for pages having an image of more than one print modality, such as both black and color images or both custom color and black images.
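The page-classification step described above can be sketched as a small function. The boolean page-attribute flags and the labels used for mixed-modality groups are illustrative assumptions; only the K, C, and P group names come from the text.

```python
# Sketch (not the patent's actual algorithm) of the job previewer's
# classification of a page into a color group based on its attributes.

def classify_page(has_black: bool, has_custom: bool, has_process: bool) -> str:
    """Map a page's image attributes to a color group label."""
    modalities = {
        name
        for name, present in (
            ("black", has_black),
            ("custom", has_custom),
            ("process", has_process),
        )
        if present
    }
    if modalities == {"black"}:
        return "K"            # black only
    if modalities == {"custom"}:
        return "C"            # custom color only
    if modalities == {"process"}:
        return "P"            # process color only
    if len(modalities) > 1:
        # pages with more than one print modality, e.g. "black+process"
        return "+".join(sorted(modalities))
    return "blank"            # no images on the page
```

For example, a page carrying both text (black) and a photograph (process color) would fall into the mixed "black+process" group rather than K or P.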
The scheduler 202 schedules the printing of a print job including selection of the marking engines to be used and the route of each sheet of the print job through the system. The scheduler 202 schedules print jobs based on various constraints. The constraints to be applied depend, at least in part, on the printing mode selected by the user. The scheduler 202 confirms with each of the system components, such as marking engines, inverters, etc. that they will be available to perform the desired function, such as printing, inversion, etc., at the designated future time, according to the proposed itinerary. Various methods of scheduling print media sheets may be employed. For example, U.S. Pat. No. 5,095,342 to Farrell, et al.; U.S. Pat. No. 5,159,395 to Farrell, et al.; U.S. Pat. No. 5,557,367 to Yang, et al.; U.S. Pat. No. 6,097,500 to Fromherz; and U.S. Pat. No. 6,618,167 to Shah; and above mentioned U.S. application Ser. Nos. 10/284,560; 10/284,561; and 10/424,322 to Fromherz, all of which are incorporated herein in their entireties by reference, disclose exemplary job schedulers which can be used to schedule the print sequence herein, with suitable modifications, such as by introducing constraints relating to the printing of monochrome pages. The exemplary printing system has at least three different printing modes of operation such as a productivity mode, a quality mode, and an economy mode, which favor the goals of productivity, quality, and economy, respectively, over the other two goals. In a given mode, the scheduler applies one or more constraints which may impact productivity, image consistency, and production cost, respectively. One of the modes available to the user may be a default mode which is selected automatically if another print mode is not selected. Productivity (productivity mode) may be expressed in terms of prints per minute (ppm) of the printing system or the time taken for a job or set of jobs to be completed. 
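The availability check the scheduler performs with each system component can be sketched as a reservation table: an itinerary is committed only if every (component, time) leg is free. The component names, slot granularity, and all-or-nothing commit are illustrative assumptions, not the mechanism of the cited schedulers.

```python
# Minimal sketch of the scheduler confirming that components (marking
# engines, inverters, etc.) are available at the designated future times
# of a proposed itinerary before committing it.

from collections import defaultdict

class Reservations:
    def __init__(self):
        self._busy = defaultdict(set)  # component name -> reserved time slots

    def available(self, component: str, slot: int) -> bool:
        return slot not in self._busy[component]

    def commit(self, itinerary: list[tuple[str, int]]) -> bool:
        """itinerary: (component, time slot) legs. All-or-nothing commit."""
        if not all(self.available(c, t) for c, t in itinerary):
            return False
        for c, t in itinerary:
            self._busy[c].add(t)
        return True
```

A second sheet whose itinerary would need an already-reserved engine at the same time is rejected, forcing the scheduler to plan a different route or a later slot.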
In general, the productivity can be increased by having more than one printer printing a portion of the job contemporaneously. For example, black pages may be printed on a black marking engine while color images are being printed on a color marking engine. Where a significant proportion of the pages are black, productivity may be increased by splitting the black pages among two or more marking engines that are available and capable of performing the task, which may result in a portion of the black pages being printed on a color printer and another portion on a black printer. Image quality (quality mode) may be expressed in terms of the consistency between images, particularly those produced by different marking engines, which may be measured, for example, in terms of gloss of the images and/or color rendering. In general, image consistency is improved by having images of a particular print modality printed on the same or a consistent marking engine. Production cost (economy mode) may be expressed, for example, in terms of the cost of printing a page or printing a print job. Production cost is generally minimized by having black images printed on a black marking engine and color images printed on a color marking engine. In selecting a particular mode, a user accepts that other priorities, such as production costs and image quality, in the case of a selected productivity mode, may be sacrificed to some degree. When a user selects a particular mode, the planner scheduler receives the user selection as an input and applies one or more constraints which are applicable to the printing mode in planning and scheduling an itinerary. Thus, the itinerary planned for one printing mode may employ a different marking engine or engines from that which would be employed in another printing mode, although in some cases, the different constraints may result in the same marking engine or engines being used for a given print job.
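The productivity-mode idea of splitting black pages among every engine able to print them can be sketched as a greedy load-balancing step. The engine speeds and names are illustrative assumptions; the patent does not prescribe a particular splitting algorithm.

```python
# Sketch of splitting black pages among the available, capable engines:
# each page is dealt to the engine projected to finish its current share
# soonest, so a faster engine naturally receives more pages.

import heapq

def split_black_pages(pages: list[int],
                      engines: dict[str, float]) -> dict[str, list[int]]:
    """Assign page ids to engines by earliest projected finish time.

    engines maps an engine name to its speed in pages per minute.
    """
    heap = [(0.0, name) for name in sorted(engines)]  # (finish time, engine)
    heapq.heapify(heap)
    assignment = {name: [] for name in engines}
    for page in pages:
        finish, name = heapq.heappop(heap)
        assignment[name].append(page)
        heapq.heappush(heap, (finish + 1.0 / engines[name], name))
    return assignment
```

With a hypothetical 100 ppm black engine and a 50 ppm color engine, six black pages split roughly two-to-one in favor of the faster engine, which matches the text's point that some black pages may land on a color printer when throughput is the goal.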
For a black only print job, all pages are assigned to any marking engine capable of printing black. All pages with any color content are assigned to a color marking engine. All pages with any black content have the black content printed on a black marking engine. All pages are printed on the same color marking engine unless the print job includes only black pages. It will be appreciated that the modes may not be optimized solely for productivity, economy, or quality and that additional or different constraints may be applied depending on the mode selected. In addition to the constraints listed above for the optimized printing modes, some of the constraints which may be used, either singly or in combination, in creating a printing mode may include those listed in TABLE 1, each available in a strict form (the same marking engine) or a relaxed form (the same or a consistent marking engine):

TABLE 1
- All black only pages are assigned to the same (or to a consistent) black marking engine.
- All pages are assigned to the same (or to a consistent) color marking engine.
- … to the same (or to a consistent) color marking engine.
- … assigned to the same (or to a consistent) black marking engine.
- … are assigned to the same (or to a consistent) marking engine.
- … to the same color marking engine (or to a consistent color marking engine).

A consistent marking engine is one which achieves image characteristics, such as gloss and color space, which fall within a predetermined acceptable range of that of another consistent marking engine. The user may be provided with a set of print modes to select from, without requiring the user to have a detailed understanding of the various constraints that the scheduler will apply in realizing the print job.
In one embodiment, the set of modes may include one or more optimized print modes for achieving the goals of productivity, quality, and/or economy, and/or may include one or more modified print modes which, while having a primary goal of productivity, quality, or economy, introduce constraints which take into consideration one of the other goals. For example, in a modified quality printing mode, which provides a compromise between image quality and productivity and/or cost, at least the facing images (e.g., those images appearing on facing pages of a finished document) which are of the same print modality are printed on the same (or a consistent) marking engine, since the eye is more apt to notice any differences between pages viewed at the same time. This allows, for example, non-facing color images to be printed on different color marking engines, where the difference is less noticeable or not noticeable. In another embodiment, the user is provided with a set of base print modes, such as the optimized modes for productivity, quality, and economy, described above, and allowed to select one or more optional preferences from a set of preferences, such as one or more of the constraints in Table 1 above. Each mode may have its own set of associated user-selectable preferences. The scheduler applies the user-selected preferences to schedule an itinerary, applying the constraints of the base mode where these do not conflict with the additional user-selected preferences. In some cases, some of the print modes may not be available for printing a particular print job. In this case, the user may be presented with a more limited set of print modes from which to select. In some cases, a plurality of the print modes may yield an identical itinerary for a particular print job.
It will be appreciated that there may be composite images to be printed, for example, those which include both black portions (black images), such as text, as well as color portions (color images), such as photographs and color drawings. In a first of the print modes, these images may be printed on a single marking engine (a color marking engine). In a second of the print modes, they may be sequentially printed by two or more marking engines, such as a black marking engine for the black image and a color marking engine for the color image. The second mode of operation favors production cost while potentially sacrificing image quality, since the imaged print media with a composite image may have a different gloss level from adjacent pages which do not include composite images and which are therefore fused once rather than twice. Additionally, the difficulties associated with registration of the imaged print media in the second marking engine can lead to the two portions of the image lacking optimal alignment. The first mode of operation generally favors image consistency while potentially sacrificing production cost. With the above understanding of the elements, the operation of the system will be readily understood and appreciated from the following description. The job previewer 204 in conjunction with the scheduler 202 operates to distribute one or more job portions of a print job among one or more marking engines based on the attributes of the print job. In general, the technique proposes an approach in which attribute information associated with a job, i.e., attribute information embedded in an electronic document and corresponding job ticket, is “parsed” and used. A print job is submitted to the computer network/server. The print job, i.e., the electronic document and job ticket associated with the job, is then parsed for information relating to job level attributes.
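The two treatments of a composite (black plus color) page can be made concrete as two itineraries with different fusing and registration counts. The engine names and the returned fields are illustrative assumptions used only to show the tradeoff.

```python
# Sketch of the two composite-image itineraries: a single pass through a
# color engine (one fusing, consistent gloss, but every impression is a
# color impression) versus a black pass followed by a color pass (cheaper
# black coverage, but two fusings and a re-registration in the second
# engine). Engine names are illustrative.

def composite_itinerary(one_pass: bool) -> dict:
    if one_pass:
        return {
            "legs": ["color_engine_102"],
            "fusings": 1,
            "registration_passes": 1,
        }
    return {
        "legs": ["black_engine_100", "color_engine_102"],
        "fusings": 2,
        "registration_passes": 2,
    }
```

The extra fusing in the two-pass itinerary is what produces the gloss difference from neighboring once-fused pages, and the second registration pass is where the alignment risk between the black and color portions enters.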
As will be appreciated, parsing may include nothing more than scanning the job ticket and the electronic document (also referred to as “job master”) to glean necessary attribute information. In conjunction with parsing, the job may be placed into a form suitable for editing. It will be appreciated that a job, when in a PDL format, is not readily edited. Thus to facilitate editing, the job is placed into an intermediate format, e.g. such as TIFF or any other suitable editable format. It should be appreciated that the preferred embodiment contemplates the placing of the job into an intermediate format, whether the job is to be edited or not, because to do so, among other things, facilitates print-on-demand preparation of the job. RIPing of the job to place it into an intermediate format can be achieved readily with a platform of the type disclosed in U.S. Pat. Nos. 5,113,494 and 5,220,674, the disclosures of which are incorporated herein in their entireties, by reference. Once the job is in a suitable intermediate format, it may be buffered so that appropriate editing procedures can be executed therewith. As will be appreciated, in one example the intermediate format would permit editing at an object oriented level in which image components or objects could be added to or deleted from the document job. The job previewer 204 determines the job level and page attributes and assigns each page to an appropriate group. The scheduler 202 includes a subroutine which determines which of one or more modes of operation can be used to execute the job. A situation well suited for the present application exists when a print job includes images having a black image and one or more color images. In one example, the job may include multiple color types, such as both process color and accent/highlight color. 
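The attribute-gathering step can be sketched as a roll-up from per-page attributes to job-level attributes. The dictionary representation of a parsed page is an assumption for illustration; the actual parsed form of a job ticket or intermediate-format job master would differ.

```python
# Sketch of deriving job-level attributes from parsed per-page attributes,
# as the job previewer (204) does before assigning pages to groups.

def job_level_attributes(pages: list[dict]) -> dict:
    """Roll per-page flags up into job-level counts."""
    return {
        "page_count": len(pages),
        "color_pages": sum(1 for p in pages if p.get("has_color")),
        "black_pages": sum(1 for p in pages if p.get("has_black")),
        "black_only_pages": sum(
            1 for p in pages if p.get("has_black") and not p.get("has_color")
        ),
    }
```

These are exactly the kinds of counts (pages with color images, pages with black images, and so forth) the scheduler later consults when deciding whether splitting black pages across engines is worthwhile.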
In one example, a user is queried by the printing system or network server, in accordance with an interactive scheme, to select a print mode, and the user's selection is received by user interface 220. Where a scanner or other image source is used, a dedicated user interface 220, such as a keyboard, touch screen, or the like may be provided for inputting user-selected print modes and preferences. Alternatively, the user may input the print mode selection on a network computer or other device remote from the printing system, and the selection is communicated to the user interface 220, for example, via the interface unit. The user may be asked to select one of the print modes which can be used for the print job. Where a user makes no selection, the printing system selects a default mode, which may be the same as one of the user selectable print modes or a different print mode.

Constraint 1: For a black only print job (a print job with no color images), all pages can be assigned to any marking engine capable of printing black. Constraint 2: For a print job which includes color images, all pages with any color content (a color image) are assigned to a color marking engine.

Constraint 1: All pages are printed on the same color marking engine unless the print job includes no pages with color (generally, black only pages). Constraint 2: Where the print job includes black only pages (no pages with color), all pages are printed on the same black engine.

Same as mode A for simplex print jobs, where printing is on one side of a sheet only. Facing pages of a duplex print job that includes only black are printed on the same (or optionally a consistent) black marking engine. For mixed simplex print jobs (those with both color pages and black only pages), pages that include any color are printed on the same (or optionally a consistent) color marking engine. Pages that are black only are printed on the same (or optionally a consistent) black marking engine.
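The first pair of constraints above (any black-capable engine for black-only jobs; color content always to a color engine) can be checked against a proposed page-to-engine assignment with a small validator. The engine names and capability sets are illustrative assumptions.

```python
# Sketch of validating an assignment against the productivity-style
# constraints quoted above. In the illustrated architecture, engines 102
# and 104 are assumed to be the color engines, and all engines are
# assumed capable of printing black.

COLOR_ENGINES = {"engine_102", "engine_104"}
BLACK_CAPABLE = {"engine_100", "engine_102", "engine_104", "engine_106"}

def satisfies_constraints(assignment: dict) -> bool:
    """assignment maps a page id to (engine name, page_has_color)."""
    for engine, has_color in assignment.values():
        if has_color and engine not in COLOR_ENGINES:
            return False  # color content must go to a color marking engine
        if not has_color and engine not in BLACK_CAPABLE:
            return False  # black-only pages: any engine capable of black
    return True
```

A scheduler could call such a validator on each candidate itinerary before reserving engine time, rejecting assignments that route a color page to a monochrome-only engine.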
For mixed duplex print jobs (those with both color pages and black only pages), both sides of a sheet are printed on the same (or optionally a consistent) color marking engine. All process color images are assigned to process color marking engines. All black images are assigned to black marking engines. All custom color images are assigned to custom color marking engines. All MICR impressions are assigned to MICR engines. (d) when the default setting is a fourth level, all color images of the print job are assigned to the color marking engine(s) and all black images of the print job are assigned to the monochrome marking engine(s). Thus, for example, when the user selected print mode or the default setting is the economy mode, the scheduler 202 may schedule the job to satisfy the specified cost efficiency constraints, without applying constraints relating to productivity and print quality, unless these constraints can be accommodated while achieving the constraints of the economy mode. It will be appreciated that in a productivity mode for a print job which includes both color and black pages, the color marking engine prints all the pages with any color content but may also print some of the black only pages, depending on the number of black pages to be printed and the availability and capacities of the black marking engine(s). These modes listed above may include further constraints in addition to those listed. Additionally or alternatively, there may be print modes which include other constraints. For example, if slightly higher image quality is desired in a productivity or economy mode, the scheduler may assign facing pages of the print job to the same marking engine. Or constraints may be provided to take into account temporary unavailability of one or more marking engines. 
Thus for example, in the event that a black marking engine goes off line, the offline constraint may modify the printing mode to allow black printing on a color marking engine, thus superseding the user-selected constraint. In one embodiment, the user is notified that the printing system is unable to print the print job according to the selected mode and may abort the job or select another mode. In each of the above exemplary print modes, it is assumed that all images are either black images or color images. The blank pages (pages with neither black nor color) are ignored. Thus, a print job with “black only” pages may also have blank pages. The modes may use similar constraints for custom color images to those for black images. Constraints can be established which address situations where there are both black and custom color images on a page and where custom color and black marking engines are both present in the printing system. The printing system thus described allows user optimization of jobs for image quality, run cost, and productivity. For instance, if the user has a monochrome production job which needs to be produced as quickly as possible, the user may select a productivity mode for the job in which the black only prints are scheduled to both the process color and black marking engines. In this case the run cost per sheet and print to print image quality consistency may be sacrificed for productivity. For a very quality conscious color customer, a job can be run entirely through the color marking engine to ensure gloss consistency from page to page. If quality is desired, but the user is willing to accept less than the highest quality mode, a mode can be invoked in which the controller schedules jobs such that facing pages are created on the same marking engine. The system disclosed has redundant black printing capability and a control system 110 that can take advantage of this redundancy in useful ways.
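The offline fallback described above amounts to a capability lookup that widens when the preferred engine set is unavailable. The engine names are illustrative assumptions.

```python
# Sketch of the offline constraint superseding the selected mode: black
# pages go to an online black engine if one exists, otherwise they are
# allowed onto an online color engine.

def engines_for_black(online: set[str],
                      black_engines: set[str],
                      color_engines: set[str]) -> set[str]:
    usable = black_engines & online
    if usable:
        return usable
    # no black engine online: fall back to color engines, superseding
    # the user-selected constraint
    return color_engines & online
```

If the fallback set is also empty, the system would take the other branch described in the text: notify the user that the job cannot be printed under the selected mode.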
The control system can schedule jobs in order to minimize run cost, maximize productivity, maximize image quality, or based on a combination of two or more of cost, productivity, and image quality.

at least one marking engine controller, in communication with the scheduler, for controlling the at least one monochrome marking engine to render pages of the print job assigned thereto and for controlling the at least one color marking engine to render pages of the print job assigned thereto.

2. The printing system of claim 1, wherein the at least one monochrome marking engine is selected from the group consisting of black (K) marking engines, custom color (C) marking engines, and combinations thereof.

3. The printing system of claim 2, wherein the at least one color marking engine includes a plurality of colorants for printing images with one or more colorant.

4. The printing system of claim 1, wherein the color marking engine prints both monochrome images and color images.

5. The printing system of claim 1, wherein the plurality of user-selectable print modes are selected from the group consisting of an image quality mode, a productivity mode, an economy mode, and a default mode.

assigning pages which include only monochrome images to the at least one monochrome marking engine and to the at least one color marking engine.

7. The printing system of claim 6, wherein when the user-selected print mode is a productivity mode, the scheduler assigns all pages within the print job which include a color image to the at least one color marking engine for printing the color images and for printing any black images that are on pages which include a color image.

8. The printing system of claim 5, wherein when the user-selected print mode is a productivity mode, the scheduler assigns all pages within the print job which include a color image to the at least one color marking engine for printing color images and black images.

9.
The printing system of claim 5, wherein when the user-selected print mode is an economy mode, the scheduler assigns all pages within the print job with only monochrome images to the at least one monochrome marking engine, for printing the monochrome images, all pages within the print job with only color images to the at least one color marking engine, for printing the color images, and all pages with both color and monochrome images to at least one monochrome marking engine for printing the monochrome image and to at least one color marking engine for printing the color image.

when the user-selected print mode is a first image quality mode and the print job includes only monochrome images, the scheduler assigns all pages of the print job to the same one of the at least one monochrome marking engine.

11. The printing system of claim 5, wherein when the user-selected print mode is a second image quality mode, the scheduler assigns all pages of the print job to any consistent color marking engine selected from the at least one color marking engine.

12. The printing system of claim 5, wherein when the user-selected print mode is a third image quality mode, the scheduler assigns all facing pages which include color images to the same one of the at least one color marking engine and optionally also assigns facing pages which include only monochrome images to the same one of the at least one monochrome marking engine.

13. The printing system of claim 5, wherein each of the print modes includes at least one constraint relating to at least one of productivity, image quality and production cost of a print job.

14. The printing system of claim 1, wherein each of the plurality of user-selectable print modes applies at least one constraint to the scheduler and wherein each of the print modes differs from each of the other print modes in at least one constraint.
controlling the at least one monochrome marking engine to render pages of the print job assigned thereto and for controlling the at least one color marking engine to render pages of the print job assigned thereto.

16. The method of claim 15, wherein the establishing includes selecting between an image quality mode, a productivity mode, an economy mode, and optionally a default mode.

when the established print mode is a fourth image quality mode, printing all color images of the print job on a color marking engine and all black images of the print job on a monochrome marking engine.

the printing system executing the print job according to the print mode selected.

19. The xerographic printing system of claim 18, further comprising a finisher which receives media printed by the marking engine of the first type and the marking engine of the second type.

20. The xerographic printing system of claim 19, further comprising a conveyor system which conveys the printed media from the first and second marking engines to the finisher.

21. The xerographic printing system of claim 18, wherein the marking engine of the first type is a black marking engine and the marking engine of the second type is a color marking engine.

22. The xerographic printing system of claim 21, wherein the color marking engine includes a black colorant.
Welcome to Day Three of the 2017 Positional Power Rankings from FanGraphs. For some background on how these posts work, read the introductory post by Dave Cameron. Click on the links above to examine other positions. The rankings below come from the FanGraphs Depth Chart projections. While the projections spit out specific numbers, these projections are estimates and teams that are within a few tenths of a win of each other have similar forecasts for the season. While I didn’t create the projections, the commentary is my own. Last season was marked by a surge of offense throughout baseball, and this was very much the case for second basemen, who posted one of the greatest seasons of all time for the position. While it might be tempting to point to some sort of emerging group of players set to change the way we think about the position, the evidence doesn’t support that hypothesis. Of the top-eight players, only Jose Altuve will play this season under the age of 30, with many of the best already in their mid-30s. Jose Altuve is the exception, not the rule, as the young star has a sizable lead over his competitors at second. This is the first time in half a decade that the team with Robinson Cano isn’t atop this list. Cano didn’t stumble far and other aging vets fall in line behind him. As far as the order in which clubs appear here, there could be a shakeup before the year is out. A couple teams near the top might be shopping their second basemen if they fall out of contention. If you’re looking for a team to rise, look to the south side of Chicago, where the best prospect in baseball could get his first real shot at a starting job later this season. For the last four years, the team that employed Robinson Cano occupied the top spot in these rankings. The reign that moved from New York to Seattle is no more. Jose Altuve, who is not tall, has the best projection for a second baseman by quite a bit this year.
In 2014 and 2015, Altuve had a 130 wRC+ based almost entirely on contact that stayed in the yard. His walk rate was under 5% and his .129 ISO was based on a large collection of doubles rather than homers. Last season, he kept roughly the same rate of doubles (42) and triples (5), but hit 24 homers and increased his walk rate by 70% without striking out more. The result was a 150 wRC+, good for eighth in all of baseball last season. If Altuve has a flaw — and he does, as I am about to point out — it’s baserunning. His stolen-base percentage is fine. He stole 30 bases in 40 attempts last season and he’s been worth 3.6 runs above average on steals over the last two years, but he’s also made 29 outs on the basepaths over the last few seasons while trying to take an extra base. He’s cost himself roughly five runs on the basepaths over the last few years per UBR. That figure beats only 20 players in baseball. Among those 20 players are your Miguel Cabreras, your Prince Fielders, your Yadier Molinas, your David Ortizes, and your Albert Pujolses. Of the players in the bottom third in UBR, Altuve’s 3.6 runs stealing is one of just three positive numbers, with Melky Cabrera having gone 5-for-5 on stolen-base attempts and Joey Votto a sneaky 19 of 23 over the past few years. Nothing in the last paragraph will prevent Altuve from being very good this year, but it’s possible for him to improve. As for Marwin Gonzalez, he played more than 10 games at all four infield positions last year, including a lot at first base. That probably won’t happen this year. Like Jose Altuve, Tony Kemp is not tall. Unlike Jose Altuve, all of the other things. Three years ago, Robinson Cano signed a 10-year, $240 million contract and, man, did it look enormous. Three years later, Cano has produced 4.4 WAR per season, basically just having a really bad first half in 2015 and otherwise being excellent. His 13.2 WAR total from 2014 to 2016 was the 20th-best mark among MLB position players.
If he hits his projection this season and ages like a typical player into his late 30s, he will be worth almost the entire amount of his contract. For a contract as big as Cano’s — and one that goes to age 40 — that would have to be considered a victory for Seattle. If you’re interested in such things, here’s a fact: since Cano left, Yankees second basemen (more on them way down below) have hit for an 89 wRC+ with 2.1 WAR. Speaking of those projections again, if Cano hits 3.6 WAR this year — he was at 6.0 WAR last season, the fifth time he’s recorded a five-win season or better — and then follows a generic aging path (declining by half a win per year until he hits 38, then declining even more severely), Cano will have 13.3 WAR over the rest of his career. That would bring his career total to just a bit over 62 WAR, ahead of Willie Randolph and Ryne Sandberg and right behind Roberto Alomar, Craig Biggio, and Chase Utley among second baseman. Cano has been incredibly durable, averaging 679 plate appearances per year over the past 10 seasons, so Shawn O’Malley might be doing his utility work elsewhere on the diamond. Taylor Motter matters, mutters Dan Szymborski, who gave him a decent 1.0 WAR projection. A couple months ago, it didn’t really seem possible that the Twins would be ranked this high. It didn’t seem possible a year ago, either, albeit for different reasons. Last year, Dozier was looking at a 2.7 WAR projection after a big nosedive in the second half of 2015 made him an average hitter despite 28 home runs. For the first two months of last season, any pessimism seemed warranted, as he was hitting worse than Jason Heyward when the calendar turned to June. Then, Dozier went crazy the rest of the season, with a 157 wRC+ that trailed only a handful of the league’s best hitters (Freddie Freeman, Mike Trout, Joey Votto, etc.).
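The aging-path arithmetic sketched above is easy to reproduce. A back-of-envelope version, assuming a 3.6 WAR age-34 season, a half-win decline per season through age 38, and then an assumed full-win decline (floored at zero) through the age-40 end of the contract, lands near, though not exactly on, the article's 13.3 WAR, since the steeper late-decline rate is a guess.

```python
# Back-of-envelope WAR projection under an assumed generic aging curve:
# start at 3.6 WAR at age 34, decline 0.5 WAR/season through age 38,
# then 1.0 WAR/season (an assumption), with seasons floored at 0 WAR.

def projected_war(start: float = 3.6, start_age: int = 34, end_age: int = 40) -> float:
    war, total = start, 0.0
    for age in range(start_age, end_age + 1):
        total += max(war, 0.0)
        war -= 0.5 if age < 38 else 1.0
    return round(total, 1)
```

Under these assumptions the season-by-season line is 3.6, 3.1, 2.6, 2.1, 1.6, 0.6, 0.0, which is the shape of the decline the article describes even if the exact late-career rate differs.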
His 37 homers during that time were four more than second-place Nelson Cruz’s; his 42 homers overall were third-best in baseball and the most ever for an American League second baseman, just one behind Davey Johnson and tied with Rogers Hornsby MLB-wide. A few months ago, all the momentum seemed to be heading towards a deal with the Los Angeles Dodgers, but the Twins couldn’t extract enough value for them to pull the trigger and the Dodgers went elsewhere. Dozier is signed for two more seasons, and with the Twins unlikely to contend this season, Dozier will once again be the target of trade discussions if he continues his great play. It remains to be seen if any offers will feature anyone better than Jose DeLeon, the player whom the Dodgers eventually traded for their own second baseman. As for the backups, Jorge Polanco and Eduardo Escobar both figure to see more time at shortstop this season, but if Dozier were to get traded at some point, one of them would slide over to second, and the Twins would slide down these rankings. Remember in 2013 when the Red Sox won the World Series? You probably do. Do you remember all of the big contributors to that team? Of the 21 players who appeared in 40 games or pitched at least 50 innings, Dustin Pedroia is the only one left expected to have any sort of role on this year’s team. Pedroia is now 33 years old and still has five years left on his deal, but he’s only owed $71 million and he’s coming off a five-win season. Projections for Pedroia are more modest, calling for a decline from last season’s 120 wRC+ to something closer to average. Even an average bat with his normal good defense is going to make Pedroia a three-win player. The Red Sox traded Yoan Moncada in the offseason, so they kicked the question of Pedroia’s replacement down the line. Brock Holt is going to do what Brock Holt does: play a variety of positions semi-competently with a bat a little bit below average. 
He’s a good player to have on a team, but he’s not a player to get too excited about, unless you really want to. Who am I to tell you how to be a fan? After putting up a four-win season in 2013, Jason Kipnis signed a contract extension and was terrible, suffering from and then playing through an injured oblique. Healthy again in 2015 and 2016, Kipnis put up a pair of five-win seasons with a solid 120 wRC+. The odds that Kipnis will be able to put up another five-win season recently took a hit: Kipnis will be out at least a month with a shoulder problem. We reduced his playing time a bit, but it remains to be seen just how good Kipnis will be on his return. At second base, Cleveland might not even miss Kipnis while he’s gone. The team can go ahead and plug in Jose Ramirez, who had a breakout 2016 season and put up a five-win season of his own. This is the first team we see where the backup actually produces at a rate higher than the player ahead of him. While moving Ramirez to second base doesn’t hurt Cleveland at second, taking Ramirez away from third base hurts the team overall, as Giovanny Urshela represents a drop-off. Also available at second is Michael Martinez, who made the final out of the World Series last year. Sorry for bringing that up. From 2012 to 2015, Ian Kinsler put up a 108 wRC+ and averaged 3.8 WAR and 15 homers per season. Then last season, at age 34, Kinsler put up a 5.8 WAR season with 28 homers and a 123 wRC+. As offense exploded, Kinsler exploded with it. A reasonable estimate of this season’s production would consider last season the outlier when compared to the four seasons coming before it. Add in Kinsler’s age and he’s a good bet to decline. That said, he’s also only making $11 million this season, with a reasonable $10 million for 2018. The combination of Kinsler’s production and his contract could make him trade bait if the Tigers fall out of the race and attempt to begin rebuilding.
Andrew Romine, of the baseball-playing Romines, played eight of nine positions last year, including pitcher. He plays decent defense and can steal a base, but he’s not a long-term option should Kinsler get injured or traded. Omar Infante is still in camp but not yet on the 40-man roster so he might spend time in the minors. You might think Infante made the All-Star team a few years ago when Royals fans stuffed the ballot boxes despite Infante being terrible. He didn’t end up making it that year, but he did go back in 2010 for the Braves when he was still good. That Daniel Murphy is something else. In his first 3400 career plate appearances, from 2008 through July 2015, he hit 54 homers, or just under 10 per 600 plate appearances. In the 880 plate appearances since, Murphy has hit 40 homers, roughly 27 homers per 600 plate appearances. His 156 wRC+ last year was fourth in baseball behind Mike Trout, David Ortiz, and Joey Votto. While he’s unlikely to sustain a .348 BABIP and .249 ISO as he enters his age-32 season, he hit just as well in the second half as he did in the first half, and he’ll be a bargain anyway if he regresses back to average. Murphy isn’t great defensively, but even the prospect of poor defense and major regression offensively still makes Murphy a three-win player. The Nationals have a hole at first base, and are unlikely to get great production from either Ryan Zimmerman or Adam Lind. Murphy could provide some flexibility in moving to first base, as his newfound skills with the bat would still play there. There are a few second basemen further up this list who might be available in trade if the Nationals wanted to juggle things around a bit and maintain good production at second base. Stephen Drew, for example, put up a sneaky 124 wRC+ in just 165 plate appearances on the strength of eight homers, but unless you think Drew can hit homers with the same propensity as Daniel Murphy, that production is coming down this year. 
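The per-600-PA comparison in the Murphy paragraph above is simple scaling; here is a minimal sketch of the arithmetic, using only the totals cited there (the function name is mine):

```python
def per_600_pa(events: int, plate_appearances: int) -> float:
    """Scale a counting stat to a rate per 600 plate appearances."""
    return events * 600 / plate_appearances

# Murphy's home-run totals, as cited above
early = per_600_pa(54, 3400)   # 2008 through July 2015
recent = per_600_pa(40, 880)   # the 880 PA since

print(round(early, 1))   # 9.5  ("just under 10")
print(round(recent, 1))  # 27.3 ("roughly 27")
```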
Drew’s counterpart on the bench is Wilmer Difo, which is fitting, as the latter’s name is an anagram for I’m Drew Foil. Ben Zobrist has won two World Series in a row and has a decent shot of going for three this season. He spent most of the last year’s regular season at second base, but shifted to the outfield in the playoffs to accommodate Javier Baez. Baez was fantastic in the first two rounds of the playoffs last year before cooling off considerably in the World Series. It’s possible, even likely, that he has surpassed Zobrist in the field, but Zobrist is still the superior player overall. In 2016, Zobrist walked 15% of the time, struck out just 13% of the time, and hit 18 homers, a feat only ever matched by Charlie Gehringer, Rogers Hornsby, Joe Morgan, Jackie Robinson, and Lou Whitaker among second basemen. Zobrist’s four-year, $56 million contract seems quickly headed for bargain status after a four-win debut in Chicago. Given the depth the Cubs have, they might not require the 36-year-old Zobrist to tally 147 games or 600-plus plate appearances overall, but his defensive versatility makes it tough to get him too much rest. Baez can handle second base, but there are still questions about his offense as he strikes out a ton and doesn’t take many walks: just 17 last season compared to 129 strikeouts in 521 plate appearances, including the postseason. When Ben Zobrist was drafted, Baez was just 11 years old, and like Baez and Zobrist, Tommy La Stella is a World Series champion. In light of his .239/.315/.379 slash line and 89 wRC+, one might suppose that Joe Panik was pretty bad last year. With a combination of above-average baserunning and good defense, though, Panik still produced an average season, nothing to lose sleep over. His walk and strikeout rates were roughly the same as his good 2015 (136 wRC+) season and his ISO was right around .140 in both years. 
His average exit velocity both seasons was right around 87 mph, but his line drive and hard-hit rates were down a bit and his BABIP plummeted from .330 to .245, tanking his offensive line and likely causing concern. As most of the numbers from 2015 aligned with 2016, his dip last season isn’t too worrisome. If you’re expecting another .330 BABIP, it’s possible you’ll be disappointed, but if you are thinking something closer to .300, the season should pass without trepidation and he will be close to a three-win player. Between Orlando Calixte, Aaron Hill, and Kelby Tomlinson, we are provided with two excellent names, but no decent production. The Dodgers’ primary second baseman projects roughly a win lower than the Twins’ primary second baseman. Logan Forsythe is an above-average offensive player and an adequate defender at second base. After pursuing Brian Dozier, the Dodgers settled for Forsythe, who’s owed only around $14 million for the next two seasons. Nothing stands out about Forsythe’s game, but between walks, strikeouts, power — even average — he does nothing poorly. Logan’s run with the Rays started off slowly, with a poor half-season’s worth of playing time in 2014, but in 2015 and 2016 he was good for nearly 20 homers each season, recording a 119 wRC+. Behind Forsythe, potential Hall of Famer Chase Utley returned to the Dodgers in a reduced role. The 38-year-old will get the occasional start against right-handers. After 565 solid plate appearances last year (97 wRC+), he’s likely to see a big decline in playing time this season, and it’s not realistic to think he’ll keep producing at the same rate. Enrique Hernandez might steal some time here and there at different positions, but isn’t likely to get a ton of play at second base. New year, same mantra for the Blue Jays at second base: if Devon Travis can stay healthy. If Devon Travis can stay healthy. If Devon Travis can stay healthy. If Devon Travis can stay healthy. If Devon Travis can stay healthy. 
If Devon Travis can stay healthy. If Devon Travis can stay healthy. If Devon Travis can stay healthy. If Devon Travis can stay healthy. If Devon Travis can stay healthy. If Devon Travis can stay healthy, he has a chance to bolster what could be one of the deeper lineups in baseball, even after the loss of Edwin Encarnacion. In roughly a full season’s worth of plate appearances over the last two seasons, Travis has been worth nearly five wins, putting up a 119 wRC+. There are reasons to think that even if he does stay healthy, he won’t match that output, unfortunately. He doesn’t have a ton of power and he doesn’t walk a whole lot. With average strikeout numbers, he’s very reliant on a high BABIP to sustain production. So far in his career, it’s .354. The only active players with at least 3,000 plate appearances and a BABIP that high are Paul Goldschmidt, Mike Trout, and Joey Votto. Devon Travis is not those players. As the BABIP comes down — assuming Travis stays healthy and recovers from his current knee malady — he should still be an average offensive player and average to slightly above-average player overall. The Blue Jays could really use a full season from Travis, as Darwin Barney isn’t equipped to handle the everyday job and Steve Pearce — while a good hitter, particularly against lefties — isn’t really able to handle second on a long-term basis. After the 11th-ranked Blue Jays, the ole One-Two is occupied by Rougned Odor and the Texas Rangers. Odor has settled in quite nicely following a merely fine rookie season in 2014. In 2015, he hit for decent power while not walking much, but also not striking out that much either — especially for a guy with good power. He ended the season with a 107 wRC+. In 2016, he doubled down on the negatives to try and accentuate the power. He walked just 3% of the time and saw his strikeout rate increase to 21%. It worked, sort of. Odor hit 33 homers last season and his ISO went from .204 to .231. 
The increased power was great, but as the other stats worsened and run-scoring went up throughout baseball, Odor’s wRC+ remained virtually unchanged, at 106. Odor is still just 23 years old and projections see a slight increase on offense and a 2.5 WAR player. As long as he’s healthy, Odor will get most of the playing time, but Jurickson Profar still intrigues at 24 years old, even if injuries have taken most of his promise away. He’ll get most of his time in the outfield, but he can fill in at second if need be. In 162 career plate appearances, Hanser Alberto has a career wRC+ of six. If a player’s wRC+ is written out and not in numerical form, that is a bad sign. Neil Walker was having a very good season as he headed towards free agency last year, with a 122 wRC+ in just 458 plate appearances through the end of August. Unfortunately, a herniated disc ended his season at that point, and after the Mets made the $17 million qualifying offer, Walker backed his way into New York again. Walker has been an average to slightly above-average player his entire career. Although he couldn’t quite finish his career year last season, there isn’t much reason to think he won’t be the same player he’s been his entire career, even in his age-31 season. Walker seems to be healthy this spring, but if he’s not quite back or suffers a recurrence, T.J. Rivera could fill in much like he did last season. Rivera is past prospect status as a 28-year-old, but he did hit well throughout his minor-league career. He then hit well in 113 plate appearances last season, but his line is very BABIP-dependent, as he doesn’t walk or hit for power. As a utility option, he should do just fine. Kolten Wong’s MLB career has been up and down, and last season was no different. He started off great in the spring as the front office rewarded him with a contract extension.
On the field, he slumped to start the season and once again found himself on the wrong side of Mike Matheny, getting demoted to Triple-A just to keep his game sharp. He was a roughly average hitter after he came back in the middle of June, but struggled to find consistent playing time as Jedd Gyorko powered up. There’s still potential for Wong to get better results, and consistent playing time is necessary to give him that chance. Speaking of players who struggled to find their place after signing a contract extension, Jedd Gyorko put up just an 84 wRC+ in 2014 and 2015 and was basically a replacement-level player. The Padres shipped him to St. Louis and even paid part of his salary. Gyorko responded by hitting 30 homers in only 438 plate appearances. He’s always been a low-average player, so even with the power, his wRC+ was only 111. Knock off a bit of the power, and he should be average as both a hitter and fielder, which is a good player to have in a utility role. Gyorko doesn’t have a set role with the Cardinals this year. He will see some time at second and likely some time at third, taking starts from Jhonny Peralta. Everything I just said about Jedd Gyorko’s stat line goes double for Brad Miller, except with a little more playing time. Miller got most of his starts at shortstop, but with the trade of Logan Forsythe and the Rays’ decision to move Matt Duffy to shortstop, Miller moves to second base now, where his glove should play a little better. As for his bat, Miller’s increase in pull rate allowed him to double his career home-run total in just one season. Even if he doesn’t hit for quite as much power, an average bat plus average defense makes for an average player, which seems fitting since we’re at the halfway point. Nick Franklin might be good and he might be terrible. From 2013 to 2015, he was a part-time player and he was mostly terrible on offense. Last season, he had a 110 wRC+, but he also got fewer than 200 plate appearances.
The projections say that, at 26, he’s a below-average hitter and is somewhere between a replacement-level and one-win player. If he’s an average player on offense, he might be a bit better than that. Tim Beckham (obligatory reference to being No. 1 pick) is also around and could find some time against righties. Average hitter, average defense, average player. In a dreadful 2014 season, Schoop recorded a .244 OBP, a 64 wRC+, struck out 25% of the time, and walked in fewer than 3% of plate appearances. He missed half of 2015, but hit well when he played. Last season, the 25-year-old hit 25 homers on his way to a roughly average season. He doesn’t take walks, which is going to limit his on-base percentage, as well as his overall offensive value. In 2004, Barry Bonds posted two months during which he walked at least 46 times. Jonathan Schoop has 44 career walks in nearly 1500 plate appearances, and he has walked more than five times in a month once, last June. With an OBP below .300, his offensive upside is going to be limited, but if he keeps his strikeouts close to 20% and hits for solid power, that’s an average hitter. This will be Ryan Flaherty’s sixth season with the Orioles. He’s a really bad hitter, but he can play pretty good defense at multiple positions. The Orioles have a lot of outfielders, designated hitters, and first basemen. They have no depth at second base, third base, and shortstop and really need Schoop, Hardy and Machado to stay healthy. Over the last four years, DJ LeMahieu’s walk rate has increased from 4% to 6% to 8% to 10%. The rest of his game’s growth has not been nearly as linear. His strikeout rates ranged between 15% and 18% from 2013 to 2015 before dropping down to 13% last year. His power numbers were flat from 2013 to 2015 — below .100, in each case — before jumping up to .147 last season. In 2013 and 2014, when his BABIP was in the .320s, his wRC+ was in the 60s. In 2015, when his BABIP jumped up to .362, his wRC+ was 89. 
Then last year, when his BABIP was .388, his wRC+ jumped up to 128. The problem with the BABIP is that the only players to post a BABIP that high over the last five years were Chris Johnson in 2013, and Dexter Fowler and Torii Hunter in 2012. Those players averaged a .337 BABIP the following years. If he can keep his walk and power rates up, he might be a three-win player this year, but repeating his four-win season from last year is unlikely without another boost in walks or power. If he falls back a little bit in walks and power, he’s going to hit the projections you see above. As far as Rockies second basemen go, LeMahieu’s the DJ; the rest are in the crapper. Josh Harrison has averaged 2.5 WAR per 600 plate appearances in his career; however, he’s also only had one season above 1.5 WAR. That one season was well timed for Harrison, as a five-win campaign in 2014 earned him a guaranteed $27 million. In his other 1500 plate appearances, he’s been worth roughly 1.5 WAR a year, just like he was last season. Harrison doesn’t hit for power and he doesn’t walk, but he plays a decent second base and is a good baserunner. He probably won’t ever get close to his 2014 season again, and he might not even be average, but he’s still a useful player and Pittsburgh never owes him more than $11 million a season even if they pick up his options in 2019 and 2020. Alen Hanson is Pittsburgh’s 10th-best prospect and profiles as a speedy utility player. Ryan Schimpf put up a very surprising 2016 season, hitting 20 homers in just 330 plate appearances as a 28-year-old rookie. Everything Schimpf did last season was outsized. Strikeouts? Sure: 32% of the time. Walks? Yep: in 13% of his plate appearances. When Schimpf came to the plate in 2016, there was a better-than-50% chance you would see one of three outcomes: strikeout, walk, or home run. Of the 268 players with at least 300 plate appearances, Schimpf’s .315 ISO was best in the majors, 10 points more than David Ortiz’s own mark.
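The “better-than-50%” claim about Schimpf’s strikeout/walk/homer outcomes checks out arithmetically; here is a quick sketch using the rates cited above (variable names are mine):

```python
# Schimpf's 2016 figures, as cited above
pa = 330
k_rate = 0.32          # strikeout rate
bb_rate = 0.13         # walk rate
hr_rate = 20 / pa      # 20 homers in 330 PA, about 6%

# Share of plate appearances ending in one of the three outcomes
tto_share = k_rate + bb_rate + hr_rate
print(f"{tto_share:.1%}")  # 51.1%
```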
All the strikeouts and a low batting average meant that Schimpf could still only muster a 129 wRC+. That’s a good number but not one he’s likely to repeat over the course of a full season. Add in an oblique injury this spring and the 40% strikeout rate he recorded during the final month of 2016, and it’s reasonable to temper expectations for this season. Schimpf should also see some time at third base, leaving some plate appearances for Cory Spangenberg. Both Schimpf and Spangenberg are lefties, but there should be some sort of role for Spangenberg, who profiles as a decent player but lost his job due to injury last year. Carlos Asuaje, acquired in the Craig Kimbrel trade, got in a few games last year. He has a decent bat and, according to Eric Longenhagen, profiles as a low-end regular. The Marlins ranked 13th on this list last season. Generally speaking, a decent-sized drop like this would be attributed to some sort of change in personnel. That isn’t the case for these Marlins, though, as they have almost exactly the same players at second entering the 2017 season. The main difference is the status of Dee Gordon. A year ago, Gordon was coming off a five-win year that was aided by a .383 BABIP. I realize I keep harping on BABIP, but when we see these breakout seasons and the only number to change is BABIP, be wary of expecting it to continue. Players have control over their BABIP, but extremely high BABIPs — particularly ones out of character — come back to earth. Gordon’s 2016 season was marred by a PED suspension, but his numbers weren’t great when he was playing. That BABIP went down to .319, and even though his walk rate went up a bit, the 60-point BABIP drop led to both a decline in OBP and a 72 wRC+. The projections see a slight rebound in BABIP with decent defense, but that only adds up to 1.5 WAR for Gordon. Derek Dietrich is a solid bench option, and will get near-regular playing time at a bunch of different positions.
Last year, the Phillies were last in these positional power rankings. The projections didn’t believe much in Cesar Hernandez, barely placing him above replacement level. Hernandez was coming off a season during which he recorded a 92 wRC+ and 1.4 WAR in 452 plate appearances. Last year, Hernandez put up a really fluky four-win season. He’s unlikely to repeat the 13 UZR he posted. His .363 BABIP is also a candidate for regression. On the positive side, Hernandez takes walks and he does seem to have decent batted-ball skill. There’s also room for him to become more efficient as a baserunner: he was just 17 of 30 on stolen bases in 2016. If some of his UZR was real and he hits the ball hard, Hernandez might just double his projection. That’s the hope, anyway. If you’re reading this from start to finish, it’s probably been quite a while since I mentioned the Yankees. Here they are again. If you only looked at a couple of stats and those stats were batting average and home runs, you might see Starlin Castro’s line from last year and say, “Hey, 21 homers and a .270 average is pretty good for a second baseman.” You might believe Castro had a good year. He did not. Castro doesn’t walk and recorded only a .300 OBP despite his other strong points. He’s not a good baserunner, so he can’t add extra value there. There is some room for optimism, perhaps, on defense. Castro wasn’t a great shortstop, but even mediocre shortstops are typically pretty good at second base. There’s room for growth there, maybe. Castro is owed $31 million over the next three seasons. While that’s a substantial commitment, it’s hardly onerous. On a rate basis, the projections actually see an improvement for Castro this season due to a better number on defense and similar offensive numbers. Given how long Castro has been around, it’s hard to believe he turns only 27 at the end of this week. Ronald Torreyes will serve in a utility role for the Yankees this season, but not much is expected.
Projections have always seemed to like Rob Refsnyder due to solid minor-league numbers, but the Yankees seem to disagree, never really giving him a chance at the starting second-base job despite lackluster results at the position since Robinson Cano left. The Reds were ready to move on from Brandon Phillips. Given that his name has yet to appear in these rankings, moving on appears to have been the wise choice. Jose Peraza, acquired in the trade that sent Todd Frazier to the Chicago White Sox, turns 23 next month and had a decent debut last season. He doesn’t walk or strike out a lot. He’s fast, but couldn’t turn that into a great stolen-base rate last season, stealing 21 times in 31 chances. The Reds are in a spot competitively where it makes a lot of sense to see what they have in a player like Peraza. That said, they also have Dilson Herrera, acquired from the Mets in the Jay Bruce trade. Herrera is the same age as Peraza, and while he might not have quite the prospect shine that Peraza once had, he’s also young. Unlike Peraza, Herrera has a little bit of pop, reaching double-digit homers in each of the past four seasons. If you see Arismendy Alcantara and wonder if he’s the same Arismendy Alcantara who was a legitimate prospect with the Cubs a few years ago, yes, it’s that same Arismendy Alcantara. How many Arismendy Alcantaras do you think there are? We’ve definitely reached the most exciting name near the bottom of these rankings — and no, it’s not Tyler Saladino, even if Saladino’s currently the guy at the top of Chicago’s second-base depth chart. No, the real guy is Yoan Moncada, arguably the best prospect in baseball. Signed for $31.5 million by the Red Sox (who had to pay another $31.5 million in penalties to sign him), Moncada was traded in the deal that sent Chris Sale to Boston. Moncada tore up High-A and Double-A last year before getting a call-up at the end of the season.
If projections could talk, ZiPS would say Moncada could hold his own right now, while Steamer would reply in the negative. Projections can’t talk, of course. I can, however, so I’m just going to say, “Get excited.” He’s going to be a fun player to watch. If Moncada tears up Triple-A, we might see him in the big leagues by the end of May. As for the current starter, Tyler Saladino looks to be the biggest beneficiary of the White Sox’ decision to move on from Brett Lawrie. Saladino was a slightly below-average hitter in half a season last year. He’s easily better than replacement and has a shot at average if things break right. He could move over to third base if the White Sox move Frazier and call on Moncada, so he might end up with a full season’s worth of plate appearances. With Jose Abreu at first, Moncada at second, Tim Anderson at shortstop, and Tyler Saladino at third, you could squint and see a credible infield. In the Astros-A’s Jed Lowrie cycle, we currently find ourselves in Phase IV. Lowrie was on the Astros for one season in 2012. Then he was on the A’s for two seasons in 2013 and 2014. Then he was on the Astros in 2015. He was with the A’s last year, and he’ll rejoin the team this season, as well. His move to the Astros in 2018 is inevitable, however: don’t try to fight it, no matter how much you might want to. As for this season, Lowrie’s hitting numbers have gone down three straight seasons, to a 77 wRC+ last year. He turns 33 this season, and he’s coming off a below-replacement-level season. The projections say he’ll be above replacement this year. Adam Rosales hit 13 homers in just 248 plate appearances last year. He wasn’t particularly good in his last go-round with the A’s, and he struck out in 36% of his plate appearances last year, so it appears he’s adopted something of an all-or-nothing approach. On the plus side, he also walked a lot and hit a bunch of homers, so in limited time, it’s an approach that just might work.
I don’t know if people ever refer to Joey Wendle as “Mr. Wendle.” What I do know, however, is that if he doesn’t make the A’s, he’s going to be taken to another place — more specifically, to Tennessee, where Oakland’s Triple-A club is located. Now we get to Brandon Phillips. The former Red, once traded for Bartolo Colon, has had a pretty good career. Phillips turns 36 this year, though, and hasn’t produced an above-average offensive season since 2012. Also, his once excellent defense is likely slipping with age. If his offense doesn’t erode anymore, and his defense is still sufficient, he just might turn in an average season in the Atlanta suburbs. The Braves were 28th in these rankings last year, with Jace Peterson projected as the primary second baseman. Peterson is still around, but not expected to contribute much beyond a utility role. If Braves fans can wait just one more year, Ozzie Albies might help them move up the rankings for 2018. The projections aren’t huge believers in Jonathan Villar, on account of him (a) being not very good in his first few years as a major leaguer and also (b) recording a .373 BABIP in 2016 that’s due for regression. Villar showed improved power last season, with 19 homers to go along with 62 steals. Villar is making the transition this year from shortstop to second base to make room for Orlando Arcia, and Villar’s glove is likely a better fit away from shortstop. He’ll also likely see some time at other infield positions. Villar hits a ton of ground balls, so he relies on finding holes or amassing infield hits to be productive. With his speed, average offense is a reasonable expectation. Some improvement on defense, however, will be required for him to profile as an average overall player. Last year’s second baseman, Scooter Gennett, is still around, but he’s been replacement level the past two seasons and it’s only fair to expect more of the same, especially in a reduced role. 
Hernan Perez might just play enough everywhere to come close to qualifying for the batting title, a title he probably won’t win. Yadiel Rivera is a good fielder. These projections say Danny Espinosa isn’t a very good baseball player relative to other major-league baseball players. They might be wrong, though. Espinosa has decent power, hitting 24 homers last year. He’s a good fielder — almost certainly better than average at second base. He walked at a good rate this past year, but even his 9% mark undersells his on-base ability a little bit, as he’s also demonstrated the ability to get in the way of pitches thrown anywhere inside. If he can keep that OBP a little north of .300, play solid defense, run the bases well, and avoid double plays, he might be an average player. Luis Valbuena, who isn’t terrible, might get some starts at second, although he’s more likely to see time at first and third base. The projections really don’t like Cliff Pennington. Not fans of Kaleb Cowart and Nolan Fontana, either, though I’ll defer my judgment until I’m further convinced of their existence. After a decent cameo in 2015, Ketel Marte was set to be the Seattle Mariners’ starting shortstop in 2016. He occupied that role for a while, but he just couldn’t get positive results at the plate, hitting .259/.287/.323 for a wRC+ of 66. Low power, low walks, low average is no way to get through a season. Marte has decent speed, but couldn’t get on base enough to use it. When the Diamondbacks traded Jean Segura to the Mariners for Taijuan Walker, Marte came along for the ride. He doesn’t have a definite starting role, and he should be better than last year, but he’s going to have to improve a lot to be a worthwhile player. The position could also go to Brandon Drury. Last season, Drury got most of his playing time in the outfield, but he has experience at third base and second base, as well. He posted a solid 102 wRC+ in 499 plate appearances last season with close to 50 extra-base hits. 
Despite that performance, projections don’t see that power sticking, regarding the 24-year-old as something closer to a replacement player. Arizona is known for its warm temperatures. This decade, the Royals have given more second-base plate appearances to Omar Infante than any other player. He recorded a 60 wRC+ in Kansas City. Second in second-base plate appearances is Chris Getz with 1,098. His wRC+ was 66. Nobody else has received more than 500 plate appearances and, in total, Royals second basemen have put up a wRC+ of 72, their 44 homers barely besting Brian Dozier’s total from just last year. The tradition continues. The Royals got rid of Infante, but they haven’t gotten better. Mondesi is just 21 years old, which is interesting, but also means he should probably spend more time in the minors. Christian Colon is 27, making him less interesting, but still not very good. On we go, to Whit Merrifield. The Royals are last for a reason, and that reason is the players. You can blame the front office if you want, though the team is just one year removed from a World Series title. We hoped you liked reading 2017 Positional Power Rankings: Second Base by Craig Edwards!
https://blogs.fangraphs.com/2017-positional-power-rankings-second-base/
That’s why it’s so important to make sure you have a morning ritual where you’re focused on feeling good and thinking about your goals. The more you can feel good and experience strong emotions, while thinking about what your want, the faster you’ll be able to manifest it. ±show ▼detected; convicted Monthly Subscription “If you want to be heard, talk quietly”. English Personal Growth Exercises Browse Stocks Know that your relationships with people are bad because you made them that way. New York, NY PRODUCTIVITY & TIME MANAGEMENT Forms And this contradiction makes them feel fear, anxious, worry… Change can be a long, long time coming, but when it comes, it’s the work of a moment. I always act as if that moment will be today. This belief shift alone has proven priceless for me and for a great many of my clients and students. In basic terms, the law of attraction states that your thoughts & belief systems send certain “vibrations” out to the cosmos. In turn, the universe responds by giving you a kind of customized made-to-order set of experiences which directly validate said thoughts and beliefs. Inspirational Quote Tipps parenting Dennis William Hauck Now, imagine you are going to use the money to buy, to paid or to give whatever you wanted to. GUEST POSTS Whatever it was, you were perfect just as you were. And so much of that unique energy is still within you today. For most people it’s hidden, like buried treasure waiting to be discovered. Beakthrough to Success Online Great! Stick around and we’ll get YOU on track for manifesting YOUR dream job. Bahasa Melayu My dream today is to have “Nestor and me, the food addict” become a beacon of hope and recovery for English-speaking readers as well. With this in mind, I have retained my publication rights for all English speaking countries. Great Deals on Attracting what you want is a law of nature, not a miracle! I mean? Come on. Working with Jen and the Mastermind has been the best decision for my business and life. 
You can increase your magnetic power by devoting time to “powerful thinking.” each day. Understanding the Nature of the Brain When does the course start? Also, an important note: Sometimes the kindness, compassion and respect you show yourself may need to be a little challenging. Don’t be afraid to call yourself occasionally. When you give yourself a much needed wake up call, the law of attraction will flow into your life effortlessly. SEMINARS if faith is cultivated it will achieve mastery ~ John Paul Jones Both the manifestation determination review AND the IEP team meeting to develop an assessment plan for an FBA or revise an existing behavior intervention plan can be done at the same time. The 20th century saw a surge in interest in the subject with many books being written about it, amongst which are two of the best-selling books of all time; Think and Grow Rich (1937) by Napoleon Hill and You Can Heal Your Life (1984) by Louise Hay. Soup Whatever it was, you were perfect just as you were. And so much of that unique energy is still within you today. For most people it’s hidden, like buried treasure waiting to be discovered. Emotional Intelligence 2.0 To have an incredible relationship with my girlfriend that continues to grow daily. The second thing is, do you believe it’s going to happen? Once you master these two dynamics, you’ll be able to build what I call a belief bridge, from where you are now to any parallel universe you choose. Scroll Week #10: Radical Self Compassion and Brutal Boundaries (65:48) I think so, but i’m not sure ^ Jump up to: a b c Whittaker, S. Secret attraction Archived 2016-03-04 at the Wayback Machine., The Montreal Gazette, May 12, 2007. Here are just a few areas in your life that you could improve by utilizing The Law Of Attraction. We use this field to detect spam bots. If you fill this in, you will be marked as a spammer. YES! Send me these. Jump up ^ Hedesan, Georgiana D. (July 2014). 
Take out any note from your pocket.

All these methods are means to help you feel good and to train you to feel prosperous.

The law of attraction is the attractive, magnetic power of the Universe that draws similar energies together. It manifests through the power of creation, everywhere and in many ways. Even the law of gravity is part of the law of attraction. This law attracts thoughts, ideas, people, situations and circumstances.

Once you have your list, it’s time to amplify your signal to the universe by asking for what you want. When the universe is clear on what you want to manifest, it can help you. If you don’t ask, it will still try to help you, but it has to guess at what you truly desire.

Hypnosis, meditation, energy healing, affirmations, reiki, prayer, NLP, qi gong, yoga, mindfulness, and so on.

And why not read your opinions? What’s wrong with that? Just because we are not on the same page, it does not mean that by reading it I will be sucked into an unwanted parallel reality. In fact, I enjoyed the thinking process that started after reading your article.

The connection you then create will deliver you a great prize: health, happiness, purpose and financial abundance. Above all, true personal freedom. The greatest single wish I have for you and for all of our awesome team.

Step 9. Teach Yourself What It Means To Have Money

Many people find a spiritual awakening in those possibilities.
Connecting with the rhythms of the universe and opening up to new potentials awakens the spiritual force inside you that is connected to everything around you. The Law of Attraction demonstrates that you are connected to everything and everything is connected to you.

I remember being in month three of my journey, in Madrid, Spain, in mid-August: dripping in sweat, wearing a heavy backpack, over-thinking where I wanted to sleep. In the middle of a ruminating fit, I realized that I couldn’t make a wrong decision, and I had an epiphany: it didn’t matter! I tapped into an emotion I so rarely felt but desperately wanted… freedom. This hot, ludicrous moment outside the Prado National Museum etched itself in my heart. I genuinely got it: I couldn’t make a wrong decision. No matter what I chose, if I didn’t like it, I could change it again. Feeling this in my bones lifted a pattern of control that had haunted me for years.

Of course! These people couldn’t handle their goal because psychologically they weren’t programmed or ready to meet the terms and conditions of that goal. Yes, they certainly were financially rich; however, they still operated from a “financially poor” mentality. And it’s this mentality that caused their downfall.

Now, on each line, list those things that you have already brought into being: your relationship, your children, your marriage, your successes… and, if you care to, the failures you have created in your life through the decisions and choices you’ve made.

How To Manifest Abundance in Your Life

What is it that you desire? Manifesting requires true desire, but not just external, material desire: core-feeling desire. In order to know what we desire on a truly spiritual level, we must go within.
It is there that we find that most of the time our desires aren’t physical but emotional. We desire love, compassion, understanding, and so on. Take time to sit with yourself and determine what your soul craves.

As we got closer to the deadline to place our bid, my husband felt we shouldn’t bother. He was sure the house would go for more than we could afford, factoring in the needed renovations. Placing a bid would be a waste of time.

I likened conjuring from thin air to threading a needle. It required all of my commitment. I had to focus every ounce of my being on making this my reality.

This exercise not only can manifest exactly what you want, it can also help you to choose to think differently. The universe knows what you love and what you hate. It knows what you believe. If you start from belief and truth (any truth, no matter how small and fragile a gem it is) you can manifest miracles VERY quickly. This is my affirmation and I see it come true every single day.

It’s also important to understand that sometimes the “bad things” that happen to you are actually blessings, part of the universe helping you get what you want. For example, let’s say what you want is to make more money. You’re focused on it and you’re feeling good. Then what happens? You get fired from your job. Horrible! How could this happen? You see, perhaps the universe is going to give you a better job or career in the future, and you had to lose your job in order to get what you want: more money. Understand?

Where do you feel it in your body?

2) No Purpose: Material abundance and wealth are the most important manifestations to attract. The Universe sets your life purpose. You pick the specific goal based on wants, not values.
This is one reason there is less passion driving goal completion: these are not deep-seated, principled goals.

Both faith and love are important because they help direct your focus and attention to the right things when your world seems to be falling apart. They allow you to operate out of an attitude of non-resistance, where you shift from a state of “doubt” to a state of “believing” and “knowing”. There is suddenly certainty behind your thoughts, decisions, and actions. And it’s this certainty that will help pull you in the right direction.

Computers are always trying to emulate what the mind already does. The thing many people forget is that the mind came before the computer and will always be far more powerful.

You can access these (plus more) in Law of Attraction Origins: Your Personal Source Of Limitless Power. It’s packed with simple DIY techniques that deliver measurable results in every aspect of your life.

Once a student’s cumulative suspensions total 5 days, it is recommended that the IEP team hold a manifestation determination review to determine if any changes to the IEP are needed and/or if a behavior intervention plan needs to be revised or developed. Another meeting must be held if the student continues to be suspended and reaches 10 cumulative days of suspension for the school year.

Let me tell you one thing. If you believe the universe was created by an explosion that happened all by itself in a great void with no oxygen, then you are completely off the mark.
Jesus, Buddha, Krishna, and Einstein all spoke of it. Whether through religion or science, the law of attraction works. Belief and atheism alike belong to God. Because believers would not use all the earthly gifts God gave them, it was the non-believers who put those gifts to good use. But the best were the scientists who BELIEVED in God; they were the greatest scientists: ALBERT EINSTEIN, EDISON, TESLA, MARTIN LUTHER KING, GANDHI, LADY DIANA, KENNEDY! Because unlike believers, who wait for everything to fall ready-made into their laps, and non-believers, who do not believe, they were believers who understood that they had to use the nature God had given them. Heaven helps those who help themselves. All the other, non-believing scientists have been good for nothing but creating destruction.

Depending on your plans and your wishes, you may have people around you who offer discouraging talk such as “it’s not possible” or “you’ll never manage it”. In short, this kind of talk generally reveals two different types of people: those who worry about you and those who envy you.

“Money must be earned.” Replace this phrase with: “Money comes easily and frequently.” At first it sounds like a lie. One part of your brain says: “You’re a liar.” It’s hard! Know that this little match will go on for quite a while.

Hello Christian, your article reminds me of the saying: “Heaven helps those who help themselves!”
Once you accept the fact that you attract absolutely everything that happens to you, your life will change. But you will say: “So I have to control all my thoughts? That won’t be easy.” It may seem difficult at first, but this is where your emotional guidance system comes into play. It is a true indicator that lets you sense whether you are on the right track, because your thoughts create your feelings. Yes, emotions are that incredible gift we have for foreseeing what we are going to attract in the future.

99% of people are unable to apply the law of attraction in their lives, because it is VERY difficult for the human mind to focus on the positive. Of course, it is perfectly normal to have small relapses into negativity sometimes, but it is important to notice them and to “change frequency” immediately, by looking for the positive in the situation at hand, or simply by choosing not to let it affect you.

Yes, probably… And I find that Hill’s principles can be understood very easily just by observing one’s surroundings, even a little.

6) Why else the sayings “misfortune never comes alone”, “speak of the wolf and you’ll see its tail”, “birds of a feather flock together”, and so on?

See you very soon, and thank you for this very enriching comment!

LOL, it’s true! I’ve had non-stop fits of laughter for a while now!

Consider that she did not believe enough in the law of attraction. Thus she did not attract the right vibrations, and that is why it did not work.
She will continue her exercises with twice as much motivation.

Nothing will get your blood boiling like scrolling past a negative comment or receiving a message from an internet troll. It’s so easy to go right into reaction mode and respond to this “hater”… but I want to share an alternative point of view. Before you react, understand that a hater only means you’re on the right track, you’re becoming more successful, AND this person is just hurting. They’re looking for an outlet, and that outlet is YOU. Listen to this episode to pick up my four tips on how to handle haters.

W. Clement Stone and Napoleon Hill wrote Success Through a Positive Mental Attitude (1960).

In film and theatre production the concept is called “mise en scène”, or arranging a scene.

Since the brain is Velcro for negative experiences, it is natural that we worry so much. It’s just the brain’s tendency. Keep a worry list for two weeks. The minute you start to worry, write it down. This not only helps release the heavy energy that often keeps us stuck; at the end of two weeks you will notice that none of the worries were warranted. Your brain will have proof that worry is a waste of energy.

Manifestation determinations must be made when there is a change in the student’s placement based on the student’s suspensions.

Abundance Tip Number 31 – The shocking thing about walking your true path

Why is meditation such a powerful anxiety reliever? From building neurotransmitters, to quieting mind chatter, to cooling the amygdala, this highly in-depth article discusses why anxiety is no match for meditation.

Don’t Offer Resistance

…and thanks for sharing your story; that’s bravery.

It’s a simple read, and the arguments and proofs for the law of attraction are clearly outlined.
Having read it, it’s now time to go practice; more when I practice and see results.

Your thoughts are therefore a magnetic filter that brings to your conscious awareness everything that is aligned with the magnetic pulling power of each thought. As such, your circumstances are either good or bad depending on how you’ve attracted things into your life via your dominant thinking patterns. It really is all about energy. I hope you’re seeing and feeling that now?

When I tell people that this is what it really means to believe in a law of attraction, they don’t believe me. They say, that’s ridiculous. We don’t control everything in the universe. But you are a perfect example of the negative, blame-the-victim side of the LOA. I understand, appreciate, and respect that maintaining this perspective is consistent with your beliefs in the LOA. My personal belief is that this is not healthy for you, others with whom you connect, or for society in general.

Your first step is to set a clear outcome/goal that you would like to achieve. Ask yourself:

Lucky people don’t do the ego dance, which is when we go out in public or we’re around somebody new and low self-worth leads us to question: “Who should I be? Will they love me? What do I say for them to love me?” Magnetic people don’t do that. They share a dissociation from that ego dance, and they’re just presently, authentically, vulnerably themselves.

Side note: this is the exact tactic I used to manifest the love of my life, and our relationship is word for word how I wrote it. #freaky Read all about it in How to Manifest the Love of Your Life.

Worry, fear, anxiety, doubt or resistance in the form of limiting beliefs pollute and dilute your vibration.
Check out the Love or Above Spiritual Toolkit and learn how happiness will make you a magnet for what you desire.

As you work toward your goal, you may question whether manifesting actually works. You might get discouraged and frustrated. If you are sitting in the struggle and wondering when things are going to happen, you aren’t trusting the process. When you question manifestation, you’re telling the universe to prove that manifesting doesn’t work.

1. Imagine how you want your day to go, down to the last detail

How To Let Go Of Fear And Anxiety

This law also intertwines with some of the other universal laws, in particular the Law of Concentration. As you focus on giving to others, your brain places primary importance on the things you are focusing on. As a result, you start identifying more things and opportunities that are related to what you are giving. It then comes down to your ability to take advantage of the opportunities you are presented with in order to receive that which you originally gave to others.

Because when you can master energy and use it to cause ripples, you get to choose the life you want to attract. Your personal value to the world becomes enormous. And that value comes right back at you in whichever form you choose. Become an Attraction Catalyst (a secret life coach) and abundance becomes child’s play.

Now, we know that radio waves travel through space and, through the use of the right tools, become sounds or pictures. But how do they travel? How is it possible for them to travel as they do? It is because there is an ether, or zero-point energy, that permeates and penetrates everything around us. Through this ether it is possible for vibrations to travel huge distances. How does this play a part in manifesting desires?

But don’t call yourself a guy who likes to look at the “data” when you can’t even prove the data yourself.
The point of the Chalkboard Method is that it’s visual and simplified, so you can hang it above your desk or in your workspace and clearly see the clients or opportunities coming your way.

Here’s another of Mike Dooley’s Notes From The Universe.

You are either a pessimist, a realist or an optimist. Or you could very well be a combination of the three in certain situations. No matter what your combination is, you are always thinking thoughts. And it’s these thoughts that attract either problems or opportunities into your life. And there is a very simple explanation for why this happens.

Blocks To Reality Creation

This is the idea that the Law of Attraction is based on. If you want to bring about something in your life, regardless of what it is, begin vibrating at a level that is congruent with your desired reality.

fear that the money will isolate you, distancing you from your peers

My friend with his multiple knife wounds also ran, and there were bandages everywhere as he made a run for it (I’m not even sure how he was able to move).

When I was seventeen, an intuitive told me to pick up a book on manifestation, to read it and follow it to a T, and that I’d be able to manifest everything I want. So I read the book and did what I was told. Nothing happened. I read The Secret and the Law of Attraction books that we’re all sort of peripherally familiar with… and still not much in that realm was helping me. A lot of it was: think positive; your thoughts control your reality. Visualize.

Especially when there are some methods that manifest so well and so quickly…

So, in sum: for most people who want to manifest money, the reason success doesn’t immediately arrive is simple. The concept of money comes with a lot of baggage, and that baggage can block your positive intentions!
Abundance Tip Number 51 – Simple miracles in one minute

You can choose to experience more of the things that make you feel good. Abundance on a world scale is no different.

If you’ve been struggling to manifest the life you want, then help is at hand!

When you decide on something specific to manifest, it’s vital that you know exactly why you want this specific thing in your life. And when you’re trying to manifest something in just 24 hours, you also have to pick something you believe you can manifest in a day.

In some ways it’s almost as if they have to grab onto any explanation with which they can convince themselves that there’s a reason why manifestation works.

This can be broken down into 3 main areas:

And the great thing is that, because of the money-back guarantee, you don’t even need to take our word for it. You can try it out at absolutely no risk whatsoever. We think Manifestation Miracle is definitely worth a try. After all, you’ve nothing to lose except the negatives in your life. And that, you have to admit, is a pretty powerful reason to get started. Well done, Heather Matthews; we think Manifestation Miracle really packs a punch!

Listen to this 4-minute audio introduction.

Over the last 25 years, ever since I read “As a Man Thinketh” by James Allen, I have been a passionate student of the art, and some would say science, of abundance. With often miraculous results.

Does fear arise? Bingo. If so, there is a belief there about a limit to your abundance. This link can help you let go of it.

This should be required viewing before you read and review the Manifestation Miracle:

Is this number going to bring me where I want to go?

Or like trying to play the poker game of life with one card…

There is a beautiful “catch-22” with the Law of Abundance that I feel it necessary to mention.
“That which is like unto itself is drawn.” – Abraham-Hicks

15. For every goal, ask yourself, “What needs to change so that I have a stronger belief that this is possible?” (then act to make those changes).

Emily – I am a successful speaker, author and facilitator of seminars that empower young and older people to feel worthy and go for their dreams; they are loved and supported by Source in their daily lives in countless ways.

Along with the ebook and audios, we get this lovely workbook, which helps us implement all that we learned. It’s a fill-in-the-blanks workbook to review and take action on what we learned.

Don’t think there’s just one secret.

5. The Money-Back Guarantee

I found that the real work is not to meditate on money, but to achieve a positive emotional state.

If you sit down in a quiet space and think about your hopes and dreams, what do you feel? Do you feel fear, anxiety, and doubt? Or do you feel happy, with a sense of contentment?

I have a loving, intimate relationship with my husband, who is also my best friend;

Okay, now let’s talk about how we can synchronize ourselves with the Law of Abundance and positive thinking.

You will find total joy from sharing your happiness with others. Heather provides you with some great ways to do this that are easy to understand and simple to do. The chapter finishes with a really great happiness exercise.

Don’t just imagine yourself counting money and feeling it in your hands. Don’t just imagine holding your soul mate in your arms in a loving embrace. For a change, stand back and imagine watching these things on a TV in your mind. Be the star of your own romantic comedy and see yourself in your mind’s eye giving the performance of your life.
Be sure about what you want, and when you do decide, please don’t doubt yourself. Remember that you’re sending a request to the Universe, which is created by thoughts and therefore responds to thoughts. Know exactly what it is that you want. If you’re not clear and sure, the Universe will receive an unclear frequency and will send you unwanted results. So be sure it is something you have strong enthusiasm for.

I thought that this was inspiring and worth watching. May the force be with you! Blessings for 2011!

This is a 21-day workbook to help get you even more in tune with the vibration of the universe and manifest what you want into your life. The workbook has one “focus task” that lasts for a week in total, and four shorter focused exercises to complete every day. There is no set time to do these tasks, so don’t worry about carving out time in your schedule; just find time where you can.

You already know there is a 21-day workbook, so you have to know that you are not going to manifest everything you want tomorrow. That’s part of what makes this course so credible. It clearly helps you to raise your vibrational energy and get to a place where you are more connected to the universe and everything that you want to attract.

While the topics themselves aren’t necessarily new, the way they’re explained definitely is.

Then one day I saw a TV presentation that helped me start thinking a bit differently.

That’s why you will want to know how the Manifestation Miracle system can work for you.

To those who are simply interested in improving their lives, in whichever way that may be, I think this is something you should definitely check out. Mostly because the whole M.M. course is so well-produced and well-paced that it is actually a rather enjoyable read.
By choosing that trip, I opened many doors that would never have become available otherwise. That’s how we manifest: by choosing something radical and committing to making it happen, no matter what!

How Does The Law Of Attraction Work?

Last week was a crazy, whirlwind, up-and-down energy for me that led me to get sick, unable to sleep for four nights, and my resting heart rate was over 100 bpm. I think that’s the normal physiological response to a massive uplevel and manifestation of something that’s been on your vision board for 10 years, right?!

Respect your true value and potential and soon you’ll be happier than you ever imagined. And you’ll be able to sell your value to the world as effortlessly and naturally as Mother Teresa and Martin Luther King. (Yes, they were both A+ level salespeople, or rainmakers if you will.)

Turn Failure into Success: Using the Law of Attraction to Overcome Obstacles

And again, let me repeat: you don’t have to build Rome in a day. One single penny of energy invested in yourself each day for 30 days = over $10 million coming back to you in terms of abundance, love and joy.

Season everything with truth and emotion. If you believe it, it will show up.

To be able to donate $100,000 a year to charity.

The thing is, though, most of us are not accustomed to taking control of our lives, or even really making a serious effort to manifest what we want.
Whatever you want is purely for your enjoyment. It’s the icing on the cake, a little extra to make your life sweeter. But you don’t need it.

When you read about the Law of Attraction, it can sometimes feel like it will take months or years to manifest anything you desire. However, experts advise that if you carefully work your way through four distinct steps, it’s possible to get results a lot more quickly. In fact, if you are wondering how to manifest anything in just 24 hours, you may only need five steps.

What I didn’t mention earlier was that this was actually called the “Passion Project”, and the students were given an hour and a half each week to work on something they were passionate about. She found her topic through research and discovered that about half the dogs at shelters were there because they were the wrong type for the family.

But manifestation in its purest form is the responsibility of the universe, or more accurately the universal laws, which can create any outcome if instructed to do so in the correct way.

The parent of the student has not allowed an evaluation of the student;

This is why the universe is such an infinitely beautiful place. The Law of Attraction dictates that whatever can be imagined and held in the mind’s eye is achievable if you take action on a plan to get to where you want to be.

9. Let Go And Let God

3. Reframe Your Limiting Beliefs

6 Physical Steps To Attracting Love: Things You Can Do Right Now

My evening routine gets me SO excited for bed, sets me up for success, and it’s only THREE steps. Tune in to listen to the breakdown, and don’t forget to share with me what resonated most about this podcast episode by tagging me on Instagram!
Be grateful for what you have, be grateful for what is on the way to you, and be grateful as you start to see the Universe lining things up for you.

Start with the small stuff. If you ultimately want to leave the job that you hate, that’s going to take time, and there are tools to support you. But some things may be easier, like choosing not to hang out with a friend who makes you feel like the sidekick. Start to distance yourself and create boundaries, and call in people who make you feel great.

“You cannot request or manifest a relationship as we desire. There is a beautiful component of wishing, manifesting, etc. that people often forget: free will. One cannot make someone do what they want them to do. … To manifest a new relationship, one needs to create a list that can easily be fulfilled. A client of mine once made a list of the qualities she wanted in a man, but she listed each request as, ‘I want a man who… and a man who… and a man who…’ She ended up becoming involved with three different men with each of the qualities requested, not one man with all three qualities. Manifesting can be tricky,” Rappaport warns.

So give yourself ten minutes and paint yourself a real nice picture. Let yourself daydream and think about what you really, truly want. You can also start a manifestation book or journal: whatever works best for you!

This is a powerful technique for building intuition. Set your sights on the sky and beyond.

I recently manifested a desire this way. I felt like sharing a bottle of whiskey, whiskey I didn’t pay for, with a friend, and followed the above directions. The very next evening, a friend called and invited me out for a drink.
My budget was very limited, so I wasn’t planning on staying long. My friend, out of the blue, ordered a bottle of whiskey and paid for the entire thing, mixers and all! I got exactly what I asked for: a shared bottle of whiskey.

Develop an unflappable belief. Very likely you haven’t manifested what you desire because deep down you don’t believe it could be yours. This is absolutely common, and there is nothing bad about it. Nevertheless, if you lack the belief that it can be yours, then it CANNOT manifest. With your disbelief you block the manifestation. Let’s say you have a clear vision with strong desire, but you don’t believe it could happen for you. Then what happens is like saying, “Actually, I cannot have it. I don’t believe it can be mine.” The Universe responds: “So be it.” Why don’t you believe that it can be yours? Dig deep within yourself to find out what your belief is, and then work on changing it.

Can you work on manifesting in tandem with therapy?

As you can see, I have some items on my list that are a 1. I really want those things, like making $100k/mo and driving a Lamborghini, but I doubt that I can make them happen within the next year; maybe a few years down the road. There are also a few things on my list that I have high belief and confidence in, such as owning an apartment in Yaletown, traveling Europe, and having a great relationship. Perfect: these are things I will be able to manifest and attract much sooner and more easily, since I’ll be sending out positive intentions and frequencies for them, and there won’t be a conflict of negative counter-intentions of doubt and fear. Does that make sense?

Now that you have spent the time to research your idea, you have developed a timeline and structured a plan with specific tasks to accomplish it.
You must make a commitment to yourself to carry out the tasks on your list and to strive to complete everything on time. These are all examples of manifestation. The problem isn't learning how to manifest; we all do it all the time. The true challenge is how we can manifest consciously, so we don't feel powerless and dependent on luck or any external factor. Nothing happens randomly (even this attack). Everything happens for a reason, and when you ensure you get the lesson from it, you can go on to do extraordinary things. You already know the value of hard work, but you don't have to make such a big deal out of manifesting. Think of it as a game, a pure game of pretending, like you played when you were a child. Approach manifesting from the perspective of play, delight, and fun, to take your needy, desperate, fearful, lack-based vibes out of the equation. Can you take advantage of this law? Yes, you can! Find yourself a quiet place with no disturbance. Science has theorized that time does not exist. It's simply a figment of our own imaginations — a figment of our collective consciousness — and perspectives of the world. In fact, the memories you have in your head give you the impression that time exists. Without those memories, there would be nothing but the present moment. The past would not exist, and the future would not exist either. Without memories there would just be “right now” and nothing else. A group of 4,000 meditators volunteered to meditate on peace and love to reduce the amount of crime in the high-crime Washington, DC area. 
A team of scientists and researchers approached the project without bias and tested for every variable imaginable. Have you ever met a writer (or entrepreneur or artist) who can tell you about 20 different ideas for books they're “going to write,” but can't point to one that they've completed and put out there? According to Harry, manifestation is just your way of asking the universe to “grant you permission” to acquire something. “If you want to manifest the love of your life, it's best to open your mind and ask the cosmos exactly what you are looking for,” she says. Others being successful doesn't limit your success. And by attracting abundance to yourself, you are not limiting another, according to the book. This book shows you why you can only act as you do. It relieves the mental pressures you assume necessary to carry out all your choices and actions. As I grow more conscious of my authentic self, of my “purpose” and my soul's desires for this lifetime, I'm constantly faced with an internal battle: do I let go of depression, or do I embrace it? You really can attract much more money than you ever dreamed possible, and this amazing little book will teach you exactly how to do it, step by step. There you go. Start today and make the rest of your life a continuous flow of prosperity and abundance. The flow of money is there for the taking. You have to know how to play it and embrace the process, coming from the solution within you. 
In the case of a student being recommended for expulsion, a manifestation determination review is conducted and an IEP team meeting is held to update the student's IEP. If the student does not have an IEP and a disability is suspected, then an assessment plan should be offered. 12) We're Not Perfect: The LOA is held out as a “perfect law” that should result in a “perfect” life. We are told that no goal is too big if you can think it; there is no such thing as an unrealistic goal. From The Secret: “You can think your way to the perfect state of health, the perfect body, the perfect weight, and eternal youth. You can bring it into being through your consistent thinking of perfection.” Reality check: life is not perfect. It can be great, fantastic, amazing, incredible, even optimal. But perfect? Won't happen. What's the problem with this way of thinking? Why not expect perfection? Fantasizing about and striving for the perfect makes you feel better in the short term but actually reduces your chance of attaining your goals and results in more unhappiness and blaming. If you are only going to be satisfied with perfect results – perfect health, perfect body, perfect family, perfect marriage, perfect friendships, perfect kids, perfect house, perfect job, perfect life – you are in for a perfect disappointment. Research studies support this. There's always a next step; it's up to you to put one foot forward, then the next. Put the required activities on your calendar and set reminders; there's no excuse to miss something with today's technology, so put it to good use! “Be careful what you wish for, because you just may get it” is not a statement to joke around with. This law is so powerful that your request could manifest instantly and powerfully, without warning. Remember, this law can be used to create or destroy. Others getting the exact thing that you want is a sign that what you want is going to manifest soon. 
But if you feel upset, angry, or jealous, then you will stop the manifestation from taking place. That's your current setting. Maybe you believe you can grow your wealth by $10 next week. Maybe you believe that figure could be $10,000. Most of us are limited by multiple negative beliefs about money. Again, rather than experiencing pride or feeling boastful, this is about understanding and seeing how the Law of Attraction is already a part of your life. It's always existed, but now you are aware of it. Learning how to consciously use the Law of Attraction to enhance your life is a process that takes time. Not only is it a skill that must be cultivated, but the actual manifestations will take time to come about as well. Patience is one of those key qualities of living a successful life. Up until a couple of months ago, I truly thought I was lacking the skills needed to manifest things. In my crazy-town brain, I felt that things just didn't work out for me like they worked out for other people. Living in a spiritually based world, I hear words like manifest, miracles, magic, dreams, angels, etc., on the regular. I buy into all of these concepts, but to be very honest, I never really felt like those concepts worked for me. Therein lay my problem: I was essentially blocking myself! I began to talk to my spiritual running buddies, started reading more books, and became more open to the idea that I am capable of living that kind of life. And guess what? It worked!
2009-09-25: Assigned to C. R. Bard, Inc. Assignors: Robert Young, Vasu Nishtala, Ronald L. Bracken. A securement device, system, and method for use with a medical article. The securement device, system, or method may include a body that has a top surface and a bottom surface. The bottom surface has an adhesive compound thereon. A resilient retainer formed from a soft, tacky elastomeric gel or foam is supported by the bottom surface of the body. The resilient retainer receives and secures a medical device. The medical device is secured to the skin of a patient upon affixing the bottom surface to the patient via the adhesive compound. The present invention relates to a system for securing medical devices to a patient. Medical patients are often in need of repeated administration of fluids or medications, or repeated draining of fluids. It is very common in the medical industry to use medical tubing to provide various liquids or solutions to a patient. For example, catheters may be used to direct fluids and/or medications into the bloodstream of the patient, or to withdraw fluids from the patient. Often, it is desirable to maintain such catheterization or medical tube insertion over an extended period of time during the treatment of a patient. In some instances, a medical article may be attached to a patient for a lengthy period of time, requiring minimal movement for proper functioning. It is often advantageous to restrict the movement of the medical tube or article, particularly when the medical article is to be administered to the patient over an extended period of time. A medical article that is not securely attached to the patient may move around, which may cause discomfort or injury to the patient, restrict the administration of fluids or medications or the draining of fluids, cause infection, or become unintentionally dislodged from the patient. 
It is common for medical providers to affix the medical article to the patient and to attempt to restrict movement of the medical article by taping the medical article to the patient's skin. Medical articles commonly attached in this way include medical lines, luer locks, or other types of connectors. Securing a medical article with tape, however, has certain drawbacks. Tape used to secure a medical article, for example at an insertion site of the medical article on the patient's skin, can collect contaminants and dirt. Such collection of contaminants and dirt can lead to infection. Normal protocol therefore requires periodic tape changes in order to inhibit germ growth. Periodic tape changes may also be necessary when replacing or repositioning the medical article. Frequent tape changes lead to other problems: excoriation of the patient's skin and adherence of contaminants to the medical article. Repeated removal of tape can excoriate the skin and cause discomfort to the patient. Additionally, removal of tape can itself cause undesired motion of the catheter device upon the patient and irritation of the patient's skin. Repeated applications of tape over the medical article can lead to the buildup of adhesive residue on the outer surface of the medical article. This residue can result in contaminants adhering to the medical article itself, increasing the likelihood of infection. In addition, residue buildup on the medical article can make the medical article stickier and more difficult for medical providers to handle. Beyond these drawbacks, tape also fails to limit medical article motion and therefore contributes to motion-related complications such as phlebitis, infiltration, and catheter migration. Consequently, there are many problems with using tape to secure a medical article. It is desirable to avoid directly taping a medical article to a patient. 
There is a need to provide a simple, yet effective device for securely holding a medical article in place on a patient's skin, while avoiding aggravating the site at which the medical article is mounted. With the increased concern over rising health care costs, there is also a need for simple and less expensive alternatives to safely securing medical articles. Therefore, a need exists for an improved medical article securement system for use with a patient that overcomes the problems associated with current designs. One aspect of the present invention involves a securement device for a medical device. The securement device includes a body and a resilient retainer. The body has a top surface and a bottom surface, and the bottom surface has an adhesive compound thereon. The resilient retainer is formed from a soft, tacky elastomeric gel or foam and is supported by the body. The resilient retainer is adapted for receiving and securing a medical device, where the medical device is secured to the skin of a patient upon affixing the bottom surface to the patient via the adhesive compound. Another aspect involves a method of securing a medical device to a patient. The method includes providing a securement device and a resilient retainer for the medical device, where the securement device includes a body having a top surface and a bottom surface, the bottom surface having an adhesive compound thereon. The resilient retainer is formed from a soft, tacky elastomeric gel or foam and is supported by the bottom surface of the body. The resilient retainer is adapted for receiving a medical device. The method further includes locating the medical device on the resilient retainer, and securing the securement device and medical device to the patient with the body via the adhesive compound. In one form, the foam is formed by curing an organopolysiloxane composition. 
In another form, the organopolysiloxane composition includes a vinyl-containing high viscosity organopolysiloxane or a blend of high viscosity vinyl-containing organopolysiloxanes, a low viscosity organopolysiloxane or a blend of low viscosity organopolysiloxanes, a reinforcing filler, a platinum catalyst, and a hydrogen containing polysiloxane copolymer. Yet another aspect involves a securement system. The securement system includes a flexible body member and a tacky gel pad. The flexible body member has a first surface and a second surface located opposite the first surface, and the first surface includes an adhesive configured for attachment to a patient. The tacky gel pad is supported by the flexible body member and is configured to deform when pressed against a medical article, where the gel pad inhibits at least lateral and longitudinal motion of the medical article when the flexible body member is attached to the patient. FIG. 1A is a perspective view of a securement device in accordance with a preferred embodiment of the present invention and shows a gel pad. FIG. 1B is a perspective view of the securement device from FIG. 1A with a release liner attached. FIG. 2 is a bottom view of the securement device from FIG. 1A. FIG. 3 is a top view of the securement device from FIG. 1A. FIG. 4 is a side view of the securement device from FIG. 1A. FIG. 5 is a front view of the securement device from FIG. 1A. FIG. 6 is a perspective view of the securement device from FIG. 1A positioned above a medical article placed on a patient's skin. FIG. 7 is a perspective view of the securement device from FIG. 1A secured over the medical article. FIG. 8 is a cross-section view taken along line 8-8 of FIG. 7 and shows the gel pad deformed about the medical article. FIG. 9 is a perspective view of the securement device from FIG. 1A secured over another medical article. FIG. 
10A is a perspective view of a securement device in accordance with another embodiment of the present invention and shows a plurality of gel pads. FIG. 10B is a perspective view of the securement device from FIG. 10A with a release liner attached. FIG. 11 is a bottom view of the securement device from FIG. 10A. FIG. 12 is a top view of the securement device from FIG. 10A. FIG. 13 is a side view of the securement device from FIG. 10A. FIG. 14 is a front view of the securement device from FIG. 10A. FIG. 15 is a perspective view of the securement device from FIG. 10A positioned above a medical article placed on a patient's skin. FIG. 16 is a perspective view of the securement device from FIG. 10A secured over the medical article. FIG. 17A is a cross-section view taken along line 17A-17A of FIG. 16 and shows the plurality of gel pads laterally deformed about the medical article. FIG. 17B is a cross-section view taken along line 17B-17B of FIG. 16 and shows one of the plurality of gel pads longitudinally deformed about the medical article. FIG. 18A is a perspective view of a securement device in accordance with another embodiment of the present invention and shows a gel pad. FIG. 18B is a perspective view of the securement device from FIG. 18A with release liners attached. FIG. 19 is a bottom view of the securement device from FIG. 18A. FIG. 20 is a top view of the securement device from FIG. 18A. FIG. 21 is a side view of the securement device from FIG. 18A. FIG. 22 is a front view of the securement device from FIG. 18A. FIG. 23 is a perspective view of a securement system in accordance with an embodiment of the present invention and shows the securement device from FIG. 18A attached to a patient's skin, a medical article placed on the securement device from FIG. 18A, and the securement device from FIG. 1A positioned above the medical article. FIG. 24 is a perspective view of the securement system from FIG. 23 secured about the medical article. FIG. 
25 is a cross-section view taken along line 25-25 of FIG. 24 and shows two gel pads deformed about the medical article. FIG. 26 is a cross-section view similar to FIG. 25 except that the securement device from FIG. 1A does not include its own gel pad. FIG. 27 is a cross-section view similar to FIG. 25 except that the securement device from FIG. 18A does not include its own gel pad. FIG. 28A is a perspective view of a securement device in accordance with another embodiment of the present invention and shows a gel or foam pad with a hole formed therethrough. FIG. 28B is a perspective view of the securement device from FIG. 28A with a release liner attached. FIG. 29 is a front view of the securement device from FIG. 28A. FIG. 30 is a perspective view of the securement device from FIG. 28A and a medical article positioned above a patient's skin. FIG. 31 is a perspective view of the securement device from FIG. 28A and the medical article attached to the patient. FIG. 32 is a cross-section view taken along line 32-32 of FIG. 31 and shows a space between the medical article and the gel or foam pad filled in with a gel or foam. The following description and examples illustrate embodiments of the present securement system in detail in the context of use with several exemplary medical articles. The principles of the present invention, however, are not limited to the illustrated medical articles. It will be understood by those of skill in the art in view of the present disclosure that the securement system described can be used with any number of articles and medical devices, including, but not limited to: catheters, connector fittings, catheter hubs, catheter adaptors, fluid delivery tubes, and other medical devices or their components, and electrical wires and cables connected to external or implanted electronic devices or sensors. 
One skilled in the art may also find additional applications for the devices and systems disclosed herein aside from use with the medical articles and devices mentioned above. Thus, the illustrations and descriptions of the securement system in connection with the medical articles are merely exemplary of some possible applications of the securement system. The securement system described herein is especially adapted to arrest lateral and/or transverse movement of a medical article, as well as hold the medical article against the patient. The securement system accomplishes this without meaningfully impairing (i.e., substantially occluding) fluid flow through a medical article such as a catheter. As described below, to accomplish this the securement device includes, among other aspects, a tacky gel or foam retainer configured to deform when pressed against a medical article. The securement system may further inhibit longitudinal motion of the medical article. For example, the gel retainer may be deformed about the medical article such that a longitudinally facing surface of the medical article abuts the gel retainer, whereby the gel inhibits longitudinal motion of the secured portion of the medical article. In addition, surface friction between the gel retainer and the medical article can inhibit longitudinal motion and/or rotation of the medical article with respect to the securement system. As will be additionally described below, when the securement device is pressed over a medical article, the gel retainer contacts the medical article and may compress and deform to accommodate an outer surface of the medical article. The outer surface may have a tubular, conical, or any other shape as explained below. In this way, a portion of the medical article may be surrounded and closely held by the gel retainer to form a stable mount. Because the medical article may be held on a plurality of sides, movement of the medical article is inhibited. 
In some embodiments, the securement system releasably engages the medical article. This allows the medical article to be disconnected from the securement system, and from the patient, for any of a variety of known purposes. For instance, the medical provider may want to remove the medical article from the securement system to ease disconnection of two connected medical articles or to clean the patient. In some embodiments, at least one securement device of the securement system is not destroyed during disengagement of the securement system. In this way, the securement device can be reused. It is not limited to use for only one medical article, but can be used multiple times for the same medical article or sometimes for different medical articles. The securement system can further be used with multiple medical articles at a single time. For example, two medical lines could be secured by at least some embodiments of the device. The two lines need not be arranged along the same axis to be secured by the device. The securement system is configured to secure medical articles having a plurality of different shapes and/or sizes. The gel retainer may conform to the shape of a portion of the medical article, thereby allowing medical articles of different sizes and shapes to be securely held on the skin of the patient. For example, the securement system may be used to hold a substantially linear medical article such as a drainage tube against the skin of the patient. The securement system may additionally be used to secure a medical article with an elongated body and a laterally extending surface, such as a winged catheter, or other medical articles that are not substantially linear, for example. The securement system is further configured to be positioned in a multitude of orientations at a multitude of locations on the patient's body. 
As described below, to accomplish this the securement device includes, among other aspects, an adhesive configured for attachment to the patient's skin. Depending on the location or desired orientation of the medical article being secured to the patient, the orientation of the securement system can be adjusted and configured by the medical provider. To assist in the description of components of the securement system, the following terms are used. Unless defined otherwise, all technical and scientific terms used herein are intended to have the same meaning as is commonly understood by one of ordinary skill in the relevant art. As used herein, the singular forms “a,” “an,” and “the” include the plural reference unless the context clearly dictates otherwise. Also, the terms “proximal” and “distal,” which are used to describe the present securement system, are used consistently with the description of the exemplary applications. Thus, proximal and distal are used in reference to the center of the patient's body. The terms “upper,” “lower,” “top,” “bottom,” “underside,” “upperside” and the like, which also are used to describe the present securement system, are used in reference to the illustrated orientation of the embodiment. For example, the term “bottom” is used to describe a surface of a device that is located nearest the skin of the patient. The term “alkyl” refers to radicals having from 1 to 8 carbon atoms per alkyl group, such as methyl, ethyl, propyl, butyl, pentyl, hexyl, octyl and the like. The term “alkenyl” refers to radicals having from 2 to 8 carbon atoms such as, vinyl, allyl and 1-propenyl. The term “aryl” refers to mononuclear and binuclear aryl radicals such as, phenyl, tolyl, xylyl, naphthyl and the like; and mononuclear aryl alkyl radicals having from zero (i.e. no alkyl group or a bond) to 8 carbon atoms per alkyl group such as benzyl, phenyl and the like. 
The term “monovalent hydrocarbon radicals” includes hydrocarbon radicals such as alkyl, alkenyl and aryl. The term “tacky” refers to an adhesive property that is somewhat sticky to the touch, enabling a gel pad or sheet padding to be readily attached to a limb or other area of a patient's body yet easily removed, i.e. to be releasably attached. The term “macerating” means to soften the skin over a period of time, especially as a result of the skin being wetted or occluded. The term “limb” refers to the paired appendages of the body used especially for movement or grasping, including the legs, knees, shins, ankles, feet, toes, arms, elbows, forearms, wrists, hands, fingers or any part thereof. The term “curing” refers to any process by which raw or uncured polysiloxanes containing reinforcing agents are converted to a finished product, i.e. to form a soft, tacky, reinforced polysiloxane elastomer. With reference now to FIG. 1A, an embodiment of a securement device 10 includes a body member 20 a and a foam or gel retainer. The foam or gel retainer is attached to the body member 20 a, and in the illustrated embodiment the foam or gel retainer is configured as a gel pad 30 with a thickness that protrudes from the body member 20 a. For ease of illustration, the securement device 10 is shown upside down in FIG. 1A. Thus, the gel pad 30 is actually attached to a bottom surface of the body member 20 a. For ease of explanation, like reference numerals are used throughout the figures to indicate like features. Individual letters are added as a suffix to the reference numerals when describing individual or varying embodiments of the features. For example, body members 20 a and 20 b may comprise like features, as described below, but may be embodied in different configurations, such as a different shape or size. As will be described further below, at least a portion of the body member 20 a visible in FIG. 1A may comprise an adhesive. 
In addition, the gel pad 30 may have a tacky property. FIG. 1B shows the securement device 10 with a removable release liner 40 a attached. The release liner 40 a covers the adhesive and the gel pad 30 of the securement device 10. The release liner 40 a may resist tearing and may be divided into a plurality of pieces to assist removal of the release liner 40 a and ease attachment of the securement device 10 to a patient. In the illustrated embodiment, the release liner 40 a is sized similarly to the body member 20 a. The release liner 40 a may, however, be configured as another size or shape. For example, the release liner 40 a may be configured such that its edges are exposed beyond the securement device 10 to provide a grasping edge for easy removal of the release liner 40 a. In the illustrated embodiment, the release liner 40 a is shown as including a tab 42, which can be grasped when removing the release liner 40 a. The release liner 40 a may be made of a paper, plastic, polyester, or similar material. For example, the release liner 40 a may comprise a material made of polycoated, siliconized paper, or another suitable material such as high density polyethylene, polypropylene, polyolefin, or silicone-coated paper. A bottom surface 22 of the body member 20 a, shown in a bottom view of the securement device 10 in FIG. 2, comprises an adhesive. The body member 20 a may be configured as an adhesive dressing, or an adhesive may be coated onto the bottom surface 22. In the illustrated embodiment, the adhesive is formed over the extent of the bottom surface 22. In other embodiments, the adhesive may only partially cover the bottom surface 22. For example, the adhesive may be formed as a solid layer or as an intermittent layer such as in a pattern of spots or strips. The adhesive comprises a compound configured to adhere to the skin of a patient. 
For example, the adhesive may comprise a medical-grade adhesive that is either diaphoretic or nondiaphoretic, depending upon the particular application. In one embodiment, the adhesive comprises one of the TEGADERM line of adhesive dressings, manufactured by 3M. As described above, the adhesive may be covered with a release liner prior to use. A top surface 24 of the body member 20 a, located opposite the bottom surface 22 and shown in a top view of the securement device 10 in FIG. 3, may be smooth, textured, or a combination of the two. In one embodiment, the top surface 24 is textured to allow a medical provider to more easily handle and apply the securement device 10. The body member 20 a is configured to be flexible. When placed over a medical device, the body member 20 a may be conformed to the shape of the medical device and/or a patient on whom the medical device is placed. The body member 20 a may comprise any number of flexible materials. In one embodiment, the body member 20 a comprises a foam (e.g., closed-cell polyethylene foam) or woven (e.g., tricot) material. The body member 20 a may be integrally formed, or may be formed as a laminate structure with a bottom layer providing the bottom surface 22 and a top layer providing the top surface 24. In such a laminate structure, one or more intermediate layers may be formed between the top layer and bottom layer. For example, a suitable laminate that comprises a foam or woven material with an adhesive layer is available commercially from Avery Dennison of Painesville, Ohio. In one embodiment, the top surface 24 is provided by an upper paper or other nonwoven cloth layer, and an inner foam layer is placed between the upper layer and a lower layer providing the adhesive. The body member 20 a may be configured in any number of sizes and shapes. 
For example, the foam or gel retainer may be attached to a first portion of the bottom surface 22 so that a second portion of the bottom surface 22 is attachable to a patient. As can be seen in a front view of the securement device 10 in FIG. 5, lateral portions 26 a and 28 a extend beyond the lateral edges of the gel pad 30. When the gel pad 30 is placed over a medical article, one or both of the lateral portions 26 a and 28 a will contact and adhere to the skin of a patient such that the securement device 10 and the medical article are attached to the patient. The lateral portions 26 a and 28 a may be sized to provide a sufficient surface area to attach to the patient such that the securement device 10 will not detach when the secured medical article is manipulated or adjusted during normal movement of the patient. In the illustrated embodiment, the ends of the lateral portions 26 a and 28 a are rounded. In other embodiments, the end of one or both of the lateral portions 26 a and 28 a is squared, pointed, or is configured as another shape. The foam or gel retainer of the securement device 10 has a suitably high coefficient of friction and hardness for securing a medical device. The foam or gel retainer is configured to deform when pressed against a medical article and may encase a portion of the medical article. The foam or gel retainer may be a soft die-cut material. The size and shape of the foam or gel retainer are not limited to the illustrated embodiments. For example, the foam or gel retainer may be formed into a pad, as with the gel pad 30, or may be shaped as a post configured to secure a medical device, for example a catheter. The foam or gel retainer may be rectangular, oval, circular, trapezoidal, or square, although other shapes can be employed depending upon the particular application. In one embodiment, the foam or gel retainer comprises a viscoelastic memory foam. 
The memory foam may be made from polyurethane with additional chemical additives that add to its viscosity level, thereby increasing the density of the foam. Depending on the chemicals used and the overall density of the foam, it can be firmer in cooler temperatures and softer in warmer environments. As will be appreciated, higher density memory foam will react with body heat to allow it to mold itself to the shape of a warm body. The memory foam may be configured to distribute pressure when placed over a medical article, and in some embodiments is configured to retain heat, thereby increasing pain relief in some patients. Those skilled in the art will understand how to construct the memory foam from the foregoing description. The foam or gel retainer may comprise a cured, tacky, reinforced polysiloxane elastomer. In such embodiment, the gel pad 30 may be formed by curing a mixture of a lower alkenyl-functional polysiloxane, such as a vinyl containing polysiloxane, and a hydrogen containing polysiloxane copolymer containing active hydrogen groups. In this regard, the term “hydrogen” refers to active hydrogen that is directly bonded to a silicon atom (Si—H), for example, silicon hydrides and hydrogen containing organopolysiloxanes. The amount of the hydrogen containing polysiloxane copolymer employed will depend upon factors such as the molar ratio of alkenyl radicals to active hydrogen in the uncured composition and the nature of these components, including such variables as polymer chain length, molecular weight and polymer structure. The organopolysiloxane elastomers disclosed herein, prior to curing, have a ratio of hydrogen to alkenyl radicals of less than 1.5, or 0.5 to 1.2, which imparts tack or tackiness to the end product produced therefrom. The tackiness is believed to be caused by the partially crosslinked organopolysiloxane elastomers.
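Because tack hinges on that molar ratio, the disclosed window reduces to a simple numeric check. The sketch below is illustrative only: the helper names are hypothetical, and only the numeric bounds (less than 1.5, preferably 0.5 to 1.2) come from the text.

```python
def is_tacky_formulation(mol_si_h: float, mol_alkenyl: float) -> bool:
    """True if the uncured composition falls inside the disclosed
    tack-imparting window: a hydrogen-to-alkenyl molar ratio below 1.5,
    which leaves the cured network only partially crosslinked and
    therefore tacky.  (Hypothetical helper; only the numeric bounds
    are taken from the text.)"""
    return mol_si_h / mol_alkenyl < 1.5

def is_preferred_ratio(mol_si_h: float, mol_alkenyl: float) -> bool:
    """True if the ratio falls in the narrower 0.5 to 1.2 range."""
    return 0.5 <= mol_si_h / mol_alkenyl <= 1.2
```

A formulation with equimolar Si—H and alkenyl content (ratio 1.0) satisfies both bounds, while a hydrogen-rich mix at a ratio of 1.6 fails the tack window entirely.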
It should be recognized that the tacky gel pad 30 possesses the requisite tacky property throughout the entire gel pad 30. However, surface tack can be modified to be greater than or less than the interior tack. Quantitative measurements of tackiness can be made using a suitable tack tester, such as a Polyken® probe tack tester, a rolling ball tack tester, a peel tester or combinations thereof. Tack can be tested with the Polyken® probe tester in accordance with any suitable procedure, such as American Society For Testing and Materials (ASTM) Designation: D2979-71 (Reapproved 1982), Standard Test Method for Pressure-Sensitive Tack of Adhesives Using an Inverted Probe Machine, pp. 187-189, from the Annual Book of ASTM Standards, Vol. 15.09. Polyken® is a trademark of the Kendall Company, used under license by Testing Machines Inc., Mineola, Long Island, N.Y. Tack can also be tested with a rolling ball tack tester in accordance with Pressure Sensitive Tape Council, Test Methods for Pressure Sensitive Tapes, 9th Edition, PSTC-6, revised August, 1989, pp. 29-30 or ASTM D3121. Tack can also be tested with a peel tester in accordance with Pressure Sensitive Tape Council, Test Methods for Pressure Sensitive Tapes, 9th Edition, PSTC-1, revised August 1989, pp. 21-22. The tacky, cushioning layer can be artificially aged prior to tack testing using conventional accelerating aging procedures, such as by exposing the layer to ultraviolet light, elevated temperatures and/or elevated humidity. The tacky gel pad 30 disclosed herein has little or no ability to induce maceration of the skin, due in part to its permeability for transporting water vapor from the skin through the gel pad. Thus, the tacky layer disclosed herein can provide a third function: inducing little or no maceration when applied to the skin for an extended period.
One test method for evaluating water vapor transmission is ASTM Designation: E96-80, Standard Test Methods for Water Vapor Transmission of Materials, edited May 1987, pp. 629-633. Determinations of the hardness of the gel pad 30 can be made with any suitable durometer for testing hardness. One test method entails resting the edge of a Shore 00 durometer on a material, applying a presser foot to the material without shock and taking the average of three readings. Further details for testing hardness can be found in ASTM Test Method D2240. One of ordinary skill in the art will appreciate that elastomers measured by the Shore 00 durometer scale are softer than those measured by the Shore A durometer scale. Representative vinyl-containing high viscosity organopolysiloxanes of formula (1) suitable for preparing a base material include, but are not limited to the following. Representative low viscosity organopolysiloxanes of formula (2) suitable for use in preparing a base material include, but are not limited to the following. The base material prepared from the vinyl-containing high viscosity organopolysiloxanes of formula (1) and the low viscosity organopolysiloxanes of formula (2) can be admixed with a copolymer containing dimethyl and methyl hydrogen siloxanes. The amount of hydrogen-containing organopolysiloxane used should be sufficient to achieve a ratio of alkenyl radicals to hydrogen in the uncured composition of less than 1.2. The elastomers are reinforced with a suitable reinforcing agent or filler such as titanium dioxide, calcium carbonate, lithopone, zinc oxide, zirconium silicate, silica aerogel, iron oxide, diatomaceous earth, silazane-treated silica, precipitated silica, fumed silica, mined silica, glass fibers, magnesium oxide, chromic oxide, zirconium oxide, aluminum oxide, alpha quartz, calcined clay and the like, as well as various reinforcing silica fillers taught in U.S. Pat. No. 
3,635,743, the contents of which are hereby incorporated by reference in their entirety, or mixtures of any of the above, or a filler selected from silazane treated silica, precipitated silica and fumed silica or mixtures thereof. In one form, the reinforcing filler is a highly reinforcing silica filler with a surface area ranging from about 80 to about 400 square meters/gram (m2/g), or from about 200 to about 400 m2/g. Typically the reinforcing agent is mixed with the vinyl-containing high viscosity organopolysiloxane (1) and low viscosity organopolysiloxane (2) prior to addition of the hydrogen containing polysiloxane copolymer. The reinforcing filler can be employed in the uncured composition in an amount ranging from 10 parts to about 70 parts per 100 parts of the uncured composition, or from 15 parts to about 40 parts, or from about 20 to about 30 parts. In the cured tacky, reinforced cushioning layer, such amounts correspond to about 10% to about 70% by weight, or from about 15% to about 40%, or from about 20% to about 30%. The durometer or hardness of the polysiloxane elastomers disclosed herein can be lowered (i.e., made softer) by incorporating low viscosity polysiloxanes into the uncured composition. Representative low viscosity polysiloxanes include polydimethylsiloxane fluids or vinyl-containing polydimethylsiloxane fluids. The average molecular weight of these plasticizing polysiloxanes can range from about 750 to about 30,000. The low viscosity polysiloxanes can be employed in an amount ranging from about zero to about 50% by weight of the uncured composition, or from about 10% to about 30%. The polysiloxane elastomers disclosed herein possess suitable hardness, tensile strength, elongation and tear strength, as based upon standard elastic materials testing. Unreinforced polysiloxane compositions are enclosed in an envelope or other supporting means, e.g., foam impregnation, in order to maintain the shape or durability of an article produced therefrom.
In contrast, the high coefficient of friction, tacky, polysiloxane gel pad 30 disclosed herein is viscoelastic and has a measurable hardness, tensile strength, elongation and/or tear strength. Further, the tacky, reinforced polysiloxanes disclosed herein can retain their elastic properties after prolonged action of compressive stresses, a property known as compression set. Compression set is an indicator of durability. According to ASTM Designation: D395-85, Standard Test Methods for Rubber Property Compression Set, pp. 34-35, the actual stressing service may involve the maintenance of a definite deflection, the constant application of a known force, or the rapidly repeated deformation and recovery resulting from intermittent compressive forces. Though the latter dynamic stressing, like the others, produces compression set, its effects as a whole are simulated more closely by compression flexing or hysteresis tests. Therefore, compression set tests are considered to be mainly applicable to service conditions involving static stresses. Tests are frequently conducted at elevated temperatures. In a first method utilizing static stresses, a test specimen is compressed to a deflection and maintained under this condition for a specified time and at a specified temperature. In a second method utilizing static stresses, a specified force is applied to the test specimen and maintained for a specified time and at a specified temperature. After application of the specified deflection or specified force the residual deformation of a test specimen is measured 30 minutes after removal from a suitable compression device in which the specimen has been subjected for a definite time to compressive deformation under specified conditions. After measurement of the residual deformation, the compression set as specified in the appropriate method is calculated according to ASTM D395-85 equations.
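The residual-deformation arithmetic can be illustrated concretely. The authoritative equations are those of ASTM D395 itself; the sketch below follows the commonly published constant-deflection and constant-force forms, with hypothetical helper names.

```python
def compression_set_constant_deflection(t0: float, ti: float, tn: float) -> float:
    """Constant-deflection (ASTM D395 "Method B" style) calculation:
    residual deformation expressed as a percentage of the deflection
    originally applied.
      t0: original specimen thickness
      ti: thickness measured 30 minutes after release
      tn: spacer (compressed) thickness"""
    return 100.0 * (t0 - ti) / (t0 - tn)

def compression_set_constant_force(t0: float, ti: float) -> float:
    """Constant-force (ASTM D395 "Method A" style) calculation:
    residual deformation as a percentage of the original thickness."""
    return 100.0 * (t0 - ti) / t0

# Example: a specimen 10.0 mm thick, compressed to 7.5 mm, that
# recovers only to 9.5 mm has taken 20% of the applied deflection
# as permanent set (or 5% of its original thickness).
```

A low compression set by either measure indicates the pad recovers nearly all of its shape after prolonged compression, the durability property discussed above.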
When produced in accordance herewith, the gel pad 30 may be prepared to exhibit the following physical properties: a durometer hardness of from about 5 units to about 55 units (Shore 00), a tensile strength of from about 20 psi to about 800 psi, a minimum elongation of from about 250% to about 1100%, a tear strength of from about 5 lb/in to about 200 lb/in, a Polyken® probe tack of about 10 grams to about 450 grams, a rolling ball tack of about 0 to about 3 inches and a peel test value of from about 0.02 lb/in to about 80 lb/in. The gel pad 30, however, is of course not limited to the above described properties. The gel pad 30 can be prepared using techniques such as molding, liquid injection molding, transfer molding, casting and the like. The gel pad 30 can be preformed into a desired shape for use with the securement system 10 or gel material may be supplied in a sheet form which may be cut to the desired shape prior to use and attached to the body member 20 a. A gel material may also be provided in a kit form, where a catalyst is provided in a first container and other components are premixed and provided in a second container. In the kit, a mold is provided and the components may be mixed, poured into the mold, and cured. Curing can be with or without heat. Such curing can be achieved by increasing the molecular weight of the uncured polysiloxane elastomers to the extent desired through crosslinking, using heating or standing at ambient temperatures, as described in U.S. Pat. No. 3,445,420, the contents of which are hereby incorporated by reference in their entirety. Generally, the degree to which the uncured polysiloxane composition can be partially crosslinked can range from about 30% to about 90%, based upon the alkenyl-containing polysiloxane, or from about 30 to about 60%. In the illustrated embodiment, the gel pad 30 is centered on the body member 20 a, as can be seen in the top view of the securement device 10 in FIG. 2.
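The property windows listed above lend themselves to a simple conformance check on measured samples. The dictionary keys and helper name below are hypothetical; the numeric ranges are taken verbatim from the text.

```python
# Disclosed property windows for the gel pad (units as given in the text).
GEL_PAD_SPEC = {
    "shore_00_hardness":    (5, 55),      # durometer units, Shore 00
    "tensile_psi":          (20, 800),
    "elongation_pct":       (250, 1100),
    "tear_lb_per_in":       (5, 200),
    "probe_tack_g":         (10, 450),
    "rolling_ball_tack_in": (0, 3),
    "peel_lb_per_in":       (0.02, 80),
}

def out_of_spec(measured: dict) -> list:
    """Return the names of any measured properties falling outside the
    disclosed windows; an empty list means every measured property is
    within range.  (Hypothetical helper for illustration only.)"""
    failures = []
    for name, (lo, hi) in GEL_PAD_SPEC.items():
        if name in measured and not (lo <= measured[name] <= hi):
            failures.append(name)
    return failures
```

Properties absent from a sample are simply skipped, since the text presents these as typical ranges rather than mandatory limits.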
In this embodiment, the lateral portions 26 a and 28 a of the bottom surface 22 may be used to secure the securement device 10 to a patient. Of course, the gel pad 30 may be positioned in another location besides being centrally located. In one embodiment in which the gel pad 30 is configured to self adhere to the patient, the gel pad 30 may be coextensive with the entire bottom surface 22 of the body member 20 a. As can be seen in a side view of the securement device 10 in FIG. 4, the gel pad 30 protrudes from the body member 20 a. When a medical article is pressed against the gel pad 30, the gel pad will deform at least toward the body member 20 a and may partially surround or encase at least a portion of the medical article. As can be seen in the side view of the medical device 10 in FIG. 4 and the front view of the medical device 10 in FIG. 5, the gel pad 30 is configured to have a uniform thickness. In other embodiments, the thickness of the gel pad 30 may fluctuate across the length and/or width of the gel pad 30. A medical article can be secured to a patient by the securement device 10, as shown in FIGS. 6 and 7. In the illustrated embodiment, the medical article is a Foley catheter 62 placed on the skin of a patient's leg 64. In the figures, a “longitudinal axis” is generally parallel to a lumen of the medical article. A “transverse axis” is normal to the longitudinal axis and extends in a direction generally parallel to the line shown extending between the securement device 10 and the catheter 62 in FIG. 6. A “lateral axis” extends normal to both the longitudinal and transverse axes. The “longitudinal direction” refers to a direction substantially parallel to the longitudinal axis; “the transverse direction” refers to a direction substantially parallel to the transverse axis; and “the lateral direction” refers to a direction substantially parallel to the lateral axis. After placing the securement device 10 above the catheter 62, as shown in FIG.
6, a medical provider can then lower the securement device 10 over the medical article 62. The medical provider presses the securement device 10 against the patient such that the gel pad 30 presses against the catheter 62 and such that the adhesive on the bottom surface 22 adheres to the skin of the patient's leg 64. The catheter 62 will thus be held on the patient by the securement device 10, as shown in FIG. 7. As can be seen in a cross-section view taken along line 8-8 of FIG. 7, which cross-section view is illustrated in FIG. 8, the gel pad 30 conforms to the shape of an outer surface of the catheter 62. In this way, the gel pad 30 may at least partially encase the catheter 62 without substantially occluding the catheter 62. As described above in relation to the gel pad 30, the gel pad 30 may return to its original shape when removed from contact with the catheter 62. In other embodiments, the gel pad 30 may substantially retain the shape into which it has conformed. In yet other embodiments, the gel or foam retainer may be formed to have a predefined contour therein to accept a medical article of a certain shape. Attaching the catheter 62 to the patient in this way inhibits at least lateral movement of the catheter 62. The catheter 62 is at least partially surrounded by the gel pad 30 and abuts the gel pad 30. The gel pad 30 may be in contact with the patient's skin, which may further inhibit motion of the securement device 10, for example due to a tackiness of the gel pad 30. In addition, transverse motion of the catheter 62 is inhibited by the securement device 10 being adhered to the patient. The securement device 10 may also inhibit longitudinal motion of the catheter 62 when pressed against the medical article. As described above, the gel pad 30 has a tacky property with a high coefficient of friction that inhibits the catheter 62 from sliding longitudinally beneath the securement device 10.
In addition, this tacky property will inhibit rotation of the catheter 62. In some embodiments, the portions of the gel pad 30 contacting the patient's skin self adhere to the patient's skin, further securing the medical article on the patient. In the embodiments in which the gel pad 30 is configured to at least partially adhere to the catheter 62, the medical provider may first press the catheter 62 against the gel pad 30 to secure the catheter 62, and then place the combination of the catheter 62 and the securement device 10 onto the skin of the patient, instead of first placing the catheter 62 on the skin of the patient. The securement device 10 can attach a variety of medical articles, singularly or in combination, in position upon a patient. For example, as can be seen in FIG. 9, the securement device 10 can be used to hold a medical article different from that of the Foley catheter 62. In the illustrated embodiment, the securement device 10 is shown as securing a catheter hub 66 to the skin of a patient's arm 68. Due to the flexibility of the body member 20 a and the viscoelasticity of the gel pad 30, the securement device can secure any number of other medical devices at any number of different positions on the patient. As will be appreciated by one of skill in the art, the configuration of the securement device 10 allows the securement device 10 to not only secure this variety of medical articles, but also to secure them in a variety of different orientations. In some embodiments, the securement device 10 can be used to hold several medical articles. For example, the gel pad 30 may have a size sufficient to encase several medical articles. In this situation, operation of the securement device 10 is not changed. The securement device 10 can be pressed down over the medical articles to engage the medical articles and adhere to the skin of the patient. The medical articles may or may not be parallel in configuration.
In some embodiments, the securement device 10 is configured to semi-permanently attach to the patient. In other embodiments, the securement device 10 is configured to be removable such that the medical article may be adjusted or replaced, such as with a similar medical article, with a medical article of a different size or shape, or with several medical articles. In this embodiment, the medical provider may peel the securement device 10 from the skin of the patient to remove or reposition the medical article. In some embodiments, the gel pad 30 is configured to be separable from the medical article without leaving a residue, for example, without leaving a sticky deposit on the medical article. With reference now to FIG. 10A, an embodiment of a securement device 100 includes a body member 20 b and a plurality of gel pads 30 a and 30 b. The gel pads 30 a and 30 b are attached to the body member 20 b, and in the illustrated embodiment the gel pads 30 a and 30 b are configured with a thickness that protrudes from the body member 20 b. For ease of illustration, the securement device 100 is shown upside down in FIG. 10A. Thus, the gel pads 30 a and 30 b are actually attached to a bottom surface of the body member 20 b. FIG. 10B shows the securement device 100 with a removable release liner 40 b attached. The release liner 40 b covers the gel pads 30 a and 30 b and adhesive portions of the body member 20 b. In other embodiments, several release liners may be attached to the securement device 100. For example, there may be separate release liners to cover each of the gel pads 30 a and 30 b or to cover separate adhesive portions on the body member 20 b. In the illustrated embodiment, the release liner is configured to have a shape that roughly corresponds to the shape of the body member 20 b, and is longer than the release liner 40 a illustrated in FIG. 1B. The release liner 40 b may otherwise be configured similar to the release liner 40 a.
In the illustrated embodiment, the securement device includes two gel pads 30 a and 30 b, which are configured similar to the gel pad 30 described above. In other embodiments, the securement device 100 may include other embodiments of a foam or gel retainer, or both a gel pad and other foam or gel retainer. In some embodiments, the securement device 100 includes more than two gel pads or other foam or gel retainers. The gel pads or other foam or gel retainers may be arranged in any number of configurations on the body member 20 b. The gel pads or other foam or gel retainers may be configured to secure one or more medical articles. As shown in a top view of the securement device 100 in FIG. 11, the body member 20 b is configured in a size and shape such that the two gel pads 30 a and 30 b may be attached to the bottom surface 22 of the body member 20 b. As can be seen in a front view of the securement device 100 in FIG. 14, lateral portions 26 b and 28 b of the illustrated embodiment extend beyond the gel pads 30 a and 30 b, respectively, and intermediate portion 29 extends between the gel pads 30 a and 30 b. As will be described in more detail below, the intermediate portion 29 may be configured to contact a portion of a medical article. The intermediate portion 29 may be configured in any number of lengths. In one embodiment, the intermediate portion 29 is configured to accept a portion of a body of a medical article, for example the body of a winged catheter. In other embodiments, the intermediate portion 29 is configured such that two medical articles, each secured by one of the gel pads 30 a and 30 b, will be spaced apart at a desired or predetermined distance. Any foam or gel retainers attached to the body member 20 b in addition to the gel pads 30 a and 30 b may be separated from each other and/or the gel pads 30 a and 30 b by similarly or differently configured intermediate portions.
The bottom surface 22 of the body member 20 b comprises an adhesive at one or more of the lateral portions 26 b and 28 b and the intermediate portion 29. In the illustrated embodiment, the bottom surface 22 comprises an adhesive that is coextensive with the body member 20 b. Thus, as illustrated, the bottom surface 22 at all of the lateral portions 26 b and 28 b and the intermediate portion 29 comprises an adhesive. In some embodiments, the bottom surface 22 at the intermediate portion 29 comprises an adhesive configured to adhere to a patient's skin. In some embodiments, the bottom surface 22 at the intermediate portion 29 comprises an adhesive configured to adhere to a medical article, or does not comprise any adhesive. In some embodiments, the bottom surface 22 at one or more of the lateral portions 26 b and 28 b does not comprise an adhesive. As can be seen in a top view of the securement device 100 in FIG. 12 and a side view of the securement device 100 in FIG. 13, the body member 20 b may otherwise be configured similar to the body member 20 a described above. For example, the ends of the lateral portions 26 b and 28 b may be shaped in various configurations or the body member 20 b may comprise a foam or woven material. A medical article can be secured to a patient by the securement device 100, as shown in FIGS. 15 and 16. In the illustrated embodiment, the medical article is shown as a peripherally inserted central catheter (PICC) 102 placed on the skin of the patient's arm 68. The PICC 102 is illustrated as having dual lumens, a body portion 104, and wings 106 a and 106 b projecting laterally from the body portion 104. The method of attaching a medical article to a patient using the securement device 100 is similar to the method of attaching a medical article to a patient using the securement device 10.
When contacting the medical article with the securement device 100, however, the gel pads 30 a and 30 b may be arranged in a number of different configurations with respect to the medical article. In the embodiment illustrated in FIG. 16, the securement device 100 is placed laterally over the PICC 102 to cover the wings 106 a and 106 b and a segment of the body portion 104. As can be seen in a cross-section view taken along line 17A-17A of FIG. 16, which cross-section view is illustrated in FIG. 17A, gel pads 30 a and 30 b may conform to the shape of the wings 106 a and 106 b. The gel pads 30 a and 30 b are shown as being compressed in the transverse direction and as surrounding lateral facing surfaces of the wings 106 a and 106 b, which may inhibit at least lateral movement of the PICC 102. In the illustrated embodiment, the body portion 104 is located between the gel pads 30 a and 30 b so as to contact the bottom surface 22 of the body member 20 b at the intermediate portion 29. The bottom surface 22 at the intermediate portion may comprise an adhesive configured to attach to a medical article. Adhering the securement device 100 to the PICC 102 may aid in securement of the PICC 102 and inhibit lateral and/or longitudinal motion of the PICC 102. In other embodiments, the adhesive of the bottom surface 22 at the intermediate portion 29 may be omitted, or the bottom surface 22 at the intermediate portion 29 may be otherwise configured to avoid adhering to the PICC 102. As can be seen in a cross-section view taken along line 17B-17B of FIG. 16, which cross-section view is illustrated in FIG. 17B, gel pad 30 b may surround longitudinally facing surfaces of the wing 106 b. Longitudinally facing surfaces of the wing 106 a may similarly be surrounded by the gel pad 30 a. 
Such configuration may further inhibit at least longitudinal motion of the PICC 102, for example by placing such longitudinally facing surfaces of the wings 106 a and/or 106 b in abutment with the gel pad 30 a and/or 30 b. In other embodiments, the gel pads 30 a and 30 b may only surround one longitudinally facing surface of one or more of the wings 106 a and 106 b or no longitudinally facing surface. In FIG. 17B, the body member 20 b is illustrated as having a size such that the body member is not in contact with the patient's skin on either side of the longitudinally facing surfaces of the wing 106 b. The body member 20 b, however, may be of a size or shape such that the body member 20 b will extend sufficiently beyond the gel pad 30 b to contact the patient's skin on one or more of these sides. The body member 20 b may be similarly configured with respect to the gel pad 30 a. In other embodiments, the PICC 102 may be arranged such that one or more of the wings 106 a and 106 b contact the body portion 104. Additionally, the securement device 100 could be placed longitudinally over the PICC 102. In such placement, the gel pads 30 a and 30 b may each contact a portion of a lumen of the PICC 102, while the intermediate portion 29 may contact the body portion 104. In yet other embodiments, a single gel pad attached to the body member 20 b could contact the PICC 102, and the single gel pad may have a size sufficient to both laterally and longitudinally surround the PICC 102. These embodiments are, of course, merely example configurations, and as described above the securement device 100 may be configured with any number of foam or gel retainers and may be placed in a multitude of different configurations to secure many types of medical articles. Use of the securement device 100 to secure such medical articles will not occlude the medical articles, and more than one medical article may be secured at a time. With reference now to FIG.
18A, an embodiment of a securement device 180 includes a body member 20 c and a gel pad 30 c. The gel pad 30 c is attached to the body member 20 c, and in the illustrated embodiment the gel pad 30 c is configured with a thickness that protrudes from the body member 20 c. The gel pad 30 c is configured similar to the gel pad 30, illustrated in FIG. 1A. FIG. 18B shows the securement device 180 with two removable release liners 40 c and 40 d attached. The release liner 40 c covers the gel pad 30 c and the surface of the body member 20 c to which the gel pad 30 c is attached. The release liner 40 d covers adhesive portions on a surface of the body member 20 c opposite the surface to which the gel pad 30 c is attached. In other embodiments, several release liners may be attached to one or both of the surfaces of the securement device 180. In some embodiments, the release liner 40 c may be smaller so as to be substantially coextensive with the gel pad 30 c. In some embodiments, one or more of the release liners 40 c and 40 d are omitted. In the illustrated embodiment, the release liners 40 c and 40 d are configured to have a shape that roughly corresponds to the shape of the body member 20 c, but in some embodiments the release liner 40 c and/or 40 d may be sized or shaped differently. The release liners 40 c and 40 d may otherwise be configured similar to the release liner 40 a, illustrated in FIG. 1B. The bottom surface 22 of the body member 20 c, shown in a bottom view of the securement device 180 in FIG. 19, comprises an adhesive. As described above in reference to the securement device 10 and the body member 20 a, the body member 20 c may be configured as an adhesive dressing, or an adhesive may be coated onto the bottom surface 22. In the illustrated embodiment, the adhesive is formed over the extent of the bottom surface 22. In other embodiments, the adhesive may only partially cover the bottom surface 22, and may be formed in various patterns and shapes.
The adhesive comprises a compound configured to adhere to the skin of a patient, as described above. As can be seen in a top view of the securement device 180 in FIG. 20, the gel pad 30 c is attached to the top surface 24 of the body member 20 c. In the illustrated embodiment, the gel pad 30 c is approximately centered on the body member 20 c. In other embodiments, the gel pad 30 c may be off-centered. In some embodiments, a plurality of gel pads may be attached to the top surface 24. The plurality of gel pads may be configured to secure one or more medical articles. The top surface 24 forms a mounting surface for attachment of other securement devices, as described in more detail below. In some embodiments, a portion of the top surface 24 comprises an adhesive. As can be seen in a side view of the securement device 180 in FIG. 21 and a front view of the securement device 180 in FIG. 22, the body member 20 c is illustrated as being approximately square. In other embodiments, the body member 20 c may be configured in other shapes. The illustrated square shape, however, may be advantageous when attaching other securement devices, as described in more detail below. The body member 20 c may otherwise be configured similar to the body member 20 a, illustrated in FIG. 1A. A medical article can be secured to a patient by using the securement device 180 and another securement device, as shown in FIGS. 23 and 24. In the illustrated embodiment, the medical article 182 includes a lumen configured to transport liquids, and the other securement device is the securement device 10 illustrated in FIG. 1A. In FIG. 23, the securement device 180 is adhered to the patient's arm 68. After placing the medical article 182 above the securement device 180, a medical provider can then lower the medical article 182 onto the securement device 180 such that the medical article 182 is in contact with the gel pad 30 c. 
Then, the securement device 10 may be lowered over the medical article 182 and pressed against the securement device 180 to attach the securement device 10 to the securement device 180 and secure the medical article 182. The medical article 182 will thus be held on the patient, as shown in FIG. 24. Of course, the medical provider may first attach the medical article 182 to the securement device 10 and then attach the combination of the securement device 10 and the medical article 182 to the securement device 180 in embodiments where the gel pad of the securement device 10 is configured to self-adhere to the medical article 182. As described above, the top surface 24, see FIG. 20, forms a mounting surface. The mounting surface is configured such that a securement device may be attached thereto. In the illustrated embodiment, the mounting surface is free of adhesives, and comprises a surface on which a securement device may be adhered. The mounting surface may be smooth, glossy, or textured, or otherwise configured such that a securement device may be adhered thereto. In other embodiments, the mounting surface may comprise an adhesive to attach to a securement device placed thereon. The mounting surface may be configured such that an attached securement device may be detached or removed. For example, the mounting surface may be configured as a smooth surface from which the securement device 10 may be peeled without damaging the securement device 180. In some embodiments, a securement device that has been removed from the mounting surface may be reattached. Those skilled in the art will appreciate that repeated removal and reattachment of securement devices and/or medical articles to the mounting surface will not cause discomfort to the patient, and that the mounting surface shields the patient's skin from excoriation.
Of course, in some embodiments the mounting surface and/or an adhesive or other attachment feature of an attaching securement device may be configured to permanently attach to the mounting surface. The mounting surface is not limited to attaching or coupling with a securement device using adhesives. For example, the mounting surface may comprise hook and/or loop fasteners configured to engage a securement device. In one embodiment, the mounting surface has snap fasteners configured to engage snap fasteners on a securement device. In some embodiments, a portion of a securement device may be permanently or semi-permanently attached to the mounting surface such that another portion of the securement device can be rotated, folded, or bent over a medical article and secured to the mounting surface. In the illustrated embodiment, the body member 20 c is configured such that the securement device 10 can be secured over the gel pad 30 c in any configuration or rotation. In other embodiments, the body member 20 c is configured in another size or shape that also allows securement of the securement device 10 over the gel pad 30 c in any configuration or rotation. For example, the body member 20 c may be configured in the shape of a circle. In some embodiments, the size and/or shape of the body member 20 c may be more closely matched with the size and/or shape of the securement device 10. In such an embodiment, there may be a limited number of configurations for attaching the securement device 10 to the mounting surface and over the gel pad 30 c. As can be seen in a cross-section view taken along line 25-25 of FIG. 24, which cross-section view is illustrated in FIG. 25, both the gel pad 30 of the securement device 10 and the gel pad 30 c of the securement device 180 may conform to the shape of an outer surface of the medical article 182. 
In this way, the gel pad 30 and the gel pad 30 c may in combination at least partially encase the medical article 182 without substantially occluding the lumen. Securing the medical article 182 in this configuration inhibits motion. For example, lateral, longitudinal, transverse, and/or rotational movement of the medical article 182 may be inhibited in this configuration. Those skilled in the art will recognize that although the securement device 180 is illustrated in combination with the securement device 10 in FIG. 25, other securement devices besides the securement device 10 may be used in combination with the securement device 180 or portions thereof. FIG. 26 illustrates a cross-section view of another combination, in which the gel pad 30 is omitted from the securement device 10 such that only the body member 20 a remains. In this embodiment, the body member 20 a is placed over the medical article 182 and adhered to the securement device 180 such that the gel pad 30 c, which is attached to the securement device 180, is deformed about the medical article 182. In other embodiments, medical tape may be placed over the medical article 182 and adhered to the securement device 180 in a similar fashion. FIG. 27 illustrates a cross-section view of yet another combination, in which the gel pad 30 is omitted from the securement device 180 such that only the body member 20 c remains. In this embodiment, the securement device 10 is placed over the medical article 182 and adhered to the body member 20 c such that the gel pad 30, which is attached to the securement device 10, is deformed about the medical article 182. In one such embodiment, the body member 20 c comprises an anchor pad configured for attachment to the patient's skin. Although the securement device 180 is illustrated as securing a medical article 182 having a tubular shape, the securement device 180 can be used to secure a variety of medical articles, singularly or in combination, in position upon a patient. 
The securement device 180 and/or another securement device used in combination with the securement device 180 may comprise one or more gel pads. For example, the securement device 180 may be used in combination with the securement device 100. When using the securement device 100 with the securement device 180, the securement device 100 can be adhered to the securement device 180 over a medical article such that the gel pad 30 c approximately aligns with the intermediate portion 29 of the securement device 100. In some embodiments, the gel pad 30 c is configured to self-adhere to a medical article such that a medical article can be secured to the securement device 180 without the need for another securement device. The securement device 180 may be packaged in a kit including one or more other securement devices. With reference now to FIG. 28A, an embodiment of a securement device 280 includes the body member 20 a and a gel pad 30 d. The gel pad 30 d is attached to the body member 20 a. The gel pad 30 d is configured similar to the gel pad 30, illustrated in FIG. 1A, with the exceptions of the gel pad 30 d being thicker and formed with a channel 282 therethrough. For ease of illustration, the securement device 280 is shown upside down in FIG. 28A. Thus, the gel pad 30 d is actually attached to a bottom surface of the body member 20 a. FIG. 28B shows the securement device 280 with a removable release liner 40 e attached. The release liner 40 e covers the gel pad 30 d and adhesive portions of the body member 20 a. The release liner 40 e is longer than the release liner 40 a illustrated in FIG. 1B to accommodate the increased thickness of the gel pad 30 d, but may otherwise be configured similar to the release liner 40 a. As can be seen in a front view of the securement device 280 in FIG. 29, the channel 282 is illustrated as having a circular cross-sectional shape and extends along the longitudinal axis. 
In the illustrated embodiment, the channel 282 is configured to accept a tubular medical article, but in other embodiments the channel 282 has any number of shapes and sizes. In addition, the shape or size of a cross-section of the channel may vary along the length of the channel. Additionally, the channel may be formed in a shape which does not follow the longitudinal axis. For example, a curved channel may aid in proper placement of a medical article or may keep the medical article in a configuration that will not interfere with a medical practitioner aiding a patient to which the securement device 280 is attached. A plurality of the gel pads 30 d may be attached to the body member 20 a, or a combination of gel pads with channels and gel pads without channels may be attached to the body member 20 a. A medical article can be secured to a patient by the securement device 280, as shown in FIGS. 30 and 31. In the embodiment shown in FIG. 30, the securement device 280 is illustrated as being placed above the patient's arm 68 with the medical article 182 passing through the channel 282. A medical provider can lower the securement device 280 onto the patient's arm 68 and press the securement device 280 against the patient such that the gel pad 30 d presses against the medical article 182 and such that the adhesive on the body member 20 a adheres to the skin of the patient's arm 68. The medical article 182 will thus be held on the patient by the securement device 280, as shown in FIG. 31. As can be seen in a cross-section view taken along line 32-32 of FIG. 31, which cross-section view is illustrated in FIG. 32, the gel pad 30 d conforms to the shape of an outer surface of the medical article 182. As a result, a variety of medical articles of varying diameter may be accepted within the channel 282 and secured by the securement device 280. 
Such secured medical articles will be inhibited from moving in at least a transverse direction, and may further be inhibited from moving in a lateral and/or longitudinal direction. In the illustrated embodiment, the area between the gel pad 30 d and the medical article 182 is filled in with a spray material 320. The spray material may comprise spray foam or gel, such as spray memory foam or a spray-in-the-can alginate. The securement device 280 may be packaged in the form of a kit including the spray foam or gel. Of course, the spray material 320 may be omitted when securing a medical article using the securement device 280. A gel material may be used to fill in an area between any of the gel pads 30, 30 a, 30 b, and 30 c and a patient's skin, and/or to fill in an area between any of the gel pads 30, 30 a, 30 b, and 30 c and another gel pad or surface. The securement devices 10, 100, and 180 may similarly be packaged with a spray foam or gel. Various aspects are described above with reference to specific forms or embodiments selected for purposes of illustration. It will be appreciated that the spirit and scope of the disclosed securement system is not limited to the selected forms. Moreover, it is to be noted that the figures provided are not drawn to any particular proportion or scale, and that many variations can be made to the illustrated embodiments. Thus, although the system has been disclosed in the context of certain preferred embodiments and examples, it will be understood by those skilled in the art that the present invention extends beyond the specifically disclosed embodiments to other alternative embodiments and/or uses of the invention and obvious modifications and equivalents thereof. In addition, while a number of variations have been shown and described in detail, other modifications, which are within the scope of this invention, will be readily apparent to those of skill in the art based upon this disclosure. 
It is also contemplated that various combinations or sub-combinations of the specific features and aspects of the embodiments may be made and still fall within the scope of the invention. Accordingly, it should be understood that various features and aspects of the disclosed embodiments can be combined with or substituted for one another in order to form varying modes of the disclosed invention. Thus, it is intended that the scope of the present invention herein disclosed should not be limited by the particular disclosed embodiments described above, but should be determined only by a fair reading of the disclosure and the claims that follow. All patents, test procedures, and other documents cited herein, including priority documents, are fully incorporated by reference to the extent such disclosure is not inconsistent with this invention and for all jurisdictions in which such incorporation is permitted. a resilient retainer formed from a soft, tacky elastomeric gel or an elastomeric foam and being supported by said body, said resilient retainer having a channel extending along a longitudinal axis, said channel being adapted for receiving and securing a medical device, the medical device being secured to the skin of a patient upon affixing said bottom surface to the patient via said adhesive compound, wherein said elastomeric gel is formed by curing an organopolysiloxane composition. 2. The securement device of claim 1, wherein said medical device is a catheter. 3. The securement device of claim 1, wherein said channel is preformed into a desired shape for receiving and securing the medical device. 4. The securement device of claim 3, wherein a space between said channel and the medical device is filled with a gel or foam. 5. The securement device of claim 4, wherein said gel or foam comprises spray gel or foam. 
wherein a molar ratio of hydrogen to vinyl radicals in the total composition is less than 1.2, such that after curing, the degree to which said tacky, reinforced polysiloxane elastomer is partially crosslinked is about 30 to about 90%. 7. The securement device of claim 6, wherein the organopolysiloxane composition has a hardness of about 5 to about 55 durometer units (Shore 00), a tackiness of about 0 to about 450 grams as determined by a Polyken probe tack tester or about 0 to about 7.6 cm as determined by a rolling ball tack tester and a tensile strength of about 0.14 to about 5.52 megapascals, a minimum elongation of about 250 to about 1100 percent and a tear strength of about 0.8 to about 35.2 kN/m. wherein the molar ratio of hydrogen to alkenyl radicals in the total uncured composition is less than 1.2, such that after curing, the degree to which the soft, tacky, reinforced polysiloxane elastomer is partially crosslinked is about 30 to about 90%. 9. The securement device of claim 1, wherein said elastomeric foam is a memory foam comprising polyurethane. securing the securement device and medical device to the patient with the body via the adhesive compound. 11. The method of claim 10, wherein the medical device is a catheter. 12. The method of claim 10, wherein the channel has a size and shape to receive at least a portion of the medical device. 13. The method of claim 12 further comprising filling a space between the channel and the medical device with a gel or foam. 14. The method of claim 13, wherein the gel or foam comprises spray gel or foam. 16. 
The method of claim 15, wherein the organopolysiloxane composition has a hardness of about 5 to about 55 durometer units (Shore 00), a tackiness of about 0 to about 450 grams as determined by a Polyken probe tack tester or about 0 to about 7.6 cm as determined by a rolling ball tack tester and a tensile strength of about 0.14 to about 5.52 megapascals, a minimum elongation of about 250 to about 1100 percent and a tear strength of about 0.8 to about 35.2 kN/m. 18. The method of claim 10, wherein said elastomeric foam is a memory foam comprising polyurethane. a tacky gel pad supported by the flexible body member and configured to form a channel extending along a longitudinal axis when pressed against a medical article, the gel pad inhibiting at least lateral and longitudinal motion of the medical article when the flexible body member is attached to the patient, wherein said gel pad is formed by curing an organopolysiloxane composition. 20. The securement system of claim 19, comprising a plurality of tacky gel pads configured to deform when pressed against the medical article. 21. The securement system of claim 19 further comprising an anchor pad having a top surface and a bottom surface, the top surface comprising a mounting surface configured for attachment to the flexible body member, the bottom surface comprising an adhesive configured for attachment to the patient's skin. 22. The securement system of claim 21, comprising a plurality of flexible body members and a plurality of gel pads, wherein the mounting surface is configured for attachment to the plurality of flexible body members. 23. The securement system of claim 21, wherein the flexible body member is releasably attached to the mounting surface. 24. The securement system of claim 19, wherein the tacky gel pad is attached to the first surface of the flexible body member. 25. The securement system of claim 19, wherein the tacky gel pad is attached to the second surface of the flexible body member. 
a resilient retainer formed from a soft, tacky elastomeric gel or an elastomeric foam and being supported by said body, said resilient retainer having a channel extending along a longitudinal axis, said channel being adapted for receiving and securing a medical device, the medical device being secured to the skin of a patient upon affixing said bottom surface to the patient via said adhesive compound, wherein said elastomeric foam is a memory foam comprising polyurethane. Search Result, Percufix® Catheter Cuff Kit, downloaded from the Internet on Aug. 15, 2001.
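The claims above recite the gel composition's acceptable property windows as plain numeric ranges (H-to-vinyl molar ratio below 1.2, 30-90% partial crosslinking, Shore 00 hardness of about 5-55, Polyken tack of about 0-450 g, tensile strength of about 0.14-5.52 MPa, elongation of about 250-1100%, tear strength of about 0.8-35.2 kN/m). As a purely illustrative sketch, not part of the patent, those windows can be collected into a single range check; the `Formulation` record and function names below are assumptions introduced for this example:

```python
# Hypothetical checker for the numeric ranges recited in claims 6-7.
# All names here are illustrative assumptions, not from the patent text.
from dataclasses import dataclass

@dataclass
class Formulation:
    h_to_vinyl_ratio: float   # molar ratio of hydrogen to vinyl radicals
    crosslink_pct: float      # degree of partial crosslinking after cure, %
    hardness_shore00: float   # hardness, Shore 00 durometer units
    tack_g: float             # Polyken probe tack, grams
    tensile_mpa: float        # tensile strength, MPa
    elongation_pct: float     # elongation, percent
    tear_kn_m: float          # tear strength, kN/m

def within_claimed_ranges(f: Formulation) -> bool:
    """True if every measured property falls inside the claimed windows."""
    return (
        f.h_to_vinyl_ratio < 1.2
        and 30 <= f.crosslink_pct <= 90
        and 5 <= f.hardness_shore00 <= 55
        and 0 <= f.tack_g <= 450
        and 0.14 <= f.tensile_mpa <= 5.52
        and 250 <= f.elongation_pct <= 1100
        and 0.8 <= f.tear_kn_m <= 35.2
    )

# Example: a formulation sitting comfortably inside every window.
sample = Formulation(1.0, 60, 20, 150, 1.0, 500, 10)
print(within_claimed_ranges(sample))  # prints True
```

Because every condition is conjunctive, a formulation falls outside the claimed composition if any single property (for example, a molar ratio of 1.5) leaves its window.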
https://patents.google.com/patent/US8394067B2/en
Iran-US Claims Tribunal, Phillips Petroleum Co. Iran v. Iran et al., 21 IRAN-U.S. C.T.R., at 79 et seq. Document-Id: 232300, Please cite as: "https://www.trans-lex.org/232300"

1. The Claims in this Case were brought by Phillips Petroleum Company Iran, a Delaware corporation, ("the Claimant") for compensation for the alleged taking in 1979 by the Respondent Islamic Republic of Iran ("Iran") of the Claimant's rights under a 1965 contract with the Respondent National Iranian Oil Company ("NIOC") for the exploration and exploitation of the petroleum resources of a certain area offshore in the Persian Gulf ("Joint Structure Agreement" or "JSA") and for damages for the alleged breach and repudiation of the same contract, also in 1979. These two Claims are stated in the alternative, except to the extent that damages are sought for breaches allegedly occurring prior to the date of the alleged taking, which the Claimant asserts occurred on 29 September 1979. The Claimant seeks U.S. $162,716,108, plus interest and costs.

2. The Respondents have presented seven counterclaims. The first, for damages for alleged bad oil field practices, is divided into seventeen separate sub-claims. The second counterclaim is for damages for alleged breach of contract by the Claimant in the preparation and submission of commerciality reports on the two oil fields discovered in the area covered by the Joint Structure Agreement. The third counterclaim is for money allegedly owed by Phillips Petroleum Company ("Phillips"), the parent corporation of the Claimant, for crude oil purchased from NIOC under a contract dated 19 June 1979. The fourth counterclaim is for money allegedly owed by the Claimant to Iranian Marine International Oil Company ("IMINOCO"), the operating company established by the parties to the Joint Structure Agreement, in connection with the provision of services by the Claimant to IMINOCO. 
The fifth counterclaim is for damages for alleged breach of contract for the sale by Phillips to IMINOCO of certain goods. The sixth counterclaim is for various taxes allegedly due from the Claimant and IMINOCO and for 1978 Stated and Additional Payments allegedly due to NIOC. The seventh counterclaim is for indemnification by the Claimant of the Respondents for one-half of any amounts awarded by the Tribunal in other cases as liabilities of IMINOCO to the claimants in those cases. The total amount sought on the counterclaims is U.S. $1,221,475,954, plus interest.

89. The principal events which the Claimant associates with the taking of its property interests occurred during 1979. The Claimant asserts that the alleged expropriation did not result from any public government decrees, but rather from concerted actions of the Government of Iran, often operating through NIOC,26 which effectively deprived the Claimant of its property.

90. The record shows that termination of the JSA relationship was heralded during the days immediately preceding and following the return of Imam Khomeini to Iran on 1 February 1979. Leading members of the Revolutionary movement announced that the first step of the new Government would be the revocation of oil contracts and the taking back of oil from the hands of the multinationals in order to realize a true nationalization of oil and in order to make the oil industry an integral part of the Iranian economy. The announcements of the intentions of the new leadership were repeated following the installation of the Revolutionary Government in mid-February 1979. On 14 February, Abdolhasan Bani Sadr, who later became President, declared that the nationalization of the oil industry would be Iran's first step to transforming the economy and that oil would be fully "integrated with the Iranian economy". 
On 28 February, the New York Times quoted a spokesman for NIOC as saying that Iran would probably nationalize all joint production ventures with foreign companies and that the number of foreign experts in the oil industry would be limited to one-fifth of the number prior to the Revolution.

91. The first concrete nationalization action was taken against the Consortium, which was by far the largest Iranian oil producer. On 10 March, NIOC sent the Consortium members a letter repudiating the Consortium agreement and stating that, in the future, the members of the Consortium could obtain oil from Iran only by purchase from NIOC.27 If there was any doubt that such action represented the policy of the new Government, that was dispelled in early April when public statements by the Minister of Economic Affairs and Finance and by the Governor of the Central Bank referred to Iran's being "free from obligations to the Consortium" and to the export of "the first consignment of our now entirely nationalized oil".

92. The first concrete actions concerning the Claimant's JSA rights were taken with respect to the oil itself following resumption of production in March 1979. NIOC unilaterally set the production rates at levels significantly below those prevailing prior to the Revolution. Despite oral requests by the Claimant during April and May to be permitted to lift petroleum, all petroleum produced by the fields was lifted by NIOC. While the Claimant apparently did not make any formal "nominations" to lift oil, the evidence is convincing that it was informally requesting from its NIOC partner permission to do so. 
No compensatory payment was made for the Claimant's share, even though this was contemplated in such circumstances by the JSA and suggested by the Claimant in the Second Party's letter of 26 June 1979 to NIOC.28 Indeed, NIOC only provided petroleum on the basis of a separate sales contract, which it concluded with the Claimant's parent corporation on 19 June 1979, despite the JSA provision that petroleum produced was "owned at the well head" 50 percent by the First Party and 50 percent by the Second Party. That the Government of Iran had decided soon after assuming power that all sales of oil produced in the country must be made by NIOC notwithstanding existing arrangements seems clear in retrospect from the events, and confirming evidence was presented in this Case in the form of an internal memorandum dated 11 July 1979 of decisions made with respect to an unrelated project by the National Petrochemical Company. That memorandum referred to the "Government's policy that all sales of hydrocarbons produced in the country must be made by NIOC".

93. Confirmation of this governmental policy is found in the Official Gazette No. 10066, dated 13 September 1979, which published Notice No. 52866, dated 18 August 1979, relating to the budget for the year 1358 (21 March 1979 - 20 March 1980). Note 38 states, in part: "Oil sale contracts shall be signed by the National Iranian Oil Company on behalf of the Government. The sale proceeds of crude oil, in any form, and that of exported oil products, shall be directly deposited in the account of the General Treasury in the Bank Markazi." On 23 May, Imam Khomeini received certain NIOC staff members and was quoted in the Tehran press as saying that the foes of Islam had had their hands cut off Iranian oil resources which "are in your own hands". 
Furthermore, at the end of 1979, the Minister of Petroleum was quoted in the press as stating that "After the Revolution, practically we have not delivered a drop of oil to the second party". In this context, it is also noted that the Law for the Protection and Expansion of Industries adopted by the Iranian Revolutionary Council on 5 July 1979 stated that the petroleum industry had already been nationalized, and that on 9 July, Prime Minister Bazargan was quoted in the Tehran press as saying the same thing.

94. Other actions affecting the Claimant's rights in IMINOCO began in May 1979. On 29 May 1979, the Managing Director of NIOC appointed a committee of seven persons to "supervise and execute the affairs of the affiliated companies until the situation of their contracts are clarified . . .". NIOC later dismissed the Managing Director appointed by the Second Party, a right reserved to the Second Party by the JSA, and vested executive authority in its own appointee. Information regarding operations of IMINOCO, principally production reports, ultimately ceased being sent to the Claimant in September.

95. A third set of actions, aimed at termination of the JSA arrangement as a whole, also commenced in the Spring of 1979. Several meetings were held in connection with the negotiations of the sale/purchase agreement noted above. These discussions were linked by NIOC to termination of the JSA and settlement of any issues arising therefrom. Ultimately, NIOC appointed in August 1979 a sub-commission of its Board of Directors, headed by Mr. Khalili, to terminate all of the JSAs and to negotiate new arrangements with each of the former partners. This sub-commission met with the IMINOCO Second Party on 29 September 1979 and notified them at that time that their JSA was terminated. Settlement terms remained linked by NIOC to the opportunity to purchase oil from NIOC in the future.

96. 
The state of affairs thus reached over the course of 1979 was confirmed during 1980 and thereafter, particularly by promulgation of the Single Article Act in January 1980 and the written notification of the "nullification" of the JSA made in August 1980. This written notification, which emanated from the Ministry of Petroleum and NIOC and not from the Special Commission, explicitly confirmed the oral notice of termination given by NIOC during 1979, i.e. before the Special Commission was formed. It thus served as little more than ratification of the actions taken during 1979. While assumption of control over property by a government does not automatically and immediately justify a conclusion that the property has been taken by the government, thus requiring compensation under international law, such a conclusion is warranted whenever events demonstrate that the owner was deprived of fundamental rights of ownership and it appears that this deprivation is not merely ephemeral. The intent of the government is less important than the effects of the measures on the owner, and the form of the measures of control or interference is less important than the reality of their impact. Tippetts, Abbott, McCarthy, Stratton, supra, at p. 11. Therefore, the Tribunal need not determine the intent of the Government of Iran; however, where the effects of actions are consistent with a policy to nationalize a whole industry and to that end expropriate particular alien property interests, and are not merely the incidental consequences of an action or policy designed for an unrelated purpose, the conclusion that a taking has occurred is all the more evident.

98. Although a government's liability to compensate for expropriation of alien property does not depend on proof that the expropriation was intentional, there seems little doubt in this Case that the new Islamic Republic intended to bring the JSA to an end and to place NIOC fully in charge of all oil production and sales. 
Even though it can readily be observed that NIOC made equivocal statements during 1979 regarding the timing and the terms for termination of the JSA, the refusal to permit the Claimant to exercise any rights under the JSA is more relevant to such a finding than any of these pronouncements. Notwithstanding the ambiguity of some of these statements and the Claimant's continued efforts to arrive at an agreed solution of the problems with the JSA, there is in this Case no evidence of any such agreed termination of the JSA nor of a waiver by the Claimant of its rights under that Agreement (as the Tribunal found in the Consortium Cases based on the evidence there).

99. The effects of Iran's actions on the Claimant's JSA rights can be summarized succinctly. Whereas the First and Second Parties jointly operated the offshore petroleum fields involved in this Case and shared 50-50 the crude petroleum produced by the fields prior to the events of 1979, thereafter the Claimant and the other Second Party companies no longer participated in joint operation of the fields, no longer received their share of the petroleum being produced, and were told by Iran that their agreement had been terminated and nullified. These changes resulted from the actions of Iran summarized above, which totally excluded the Second Party from any of its functions under the JSA.

100. The conclusion that the Claimant was deprived of its property by conduct attributable to the Government of Iran, including NIOC, rests on a series of concrete actions rather than any particular formal decree, as the formal acts merely ratified and legitimized the existing state of affairs. The Claimant suggests that the taking was complete by 29 September 1979, the date of the meeting when it was informed of the termination of the JSA. 
The Respondents contend that 11 August 1980, the date of the written notification informing the Claimant that the Special Committee had declared the JSA null and void, is the only date when the taking could be said to have been complete.

101. The Tribunal is not bound by the suggestions of the Parties in determining the date of taking for purposes of liability, but rather must determine such date on its own, based on the facts of the case. The Tribunal has previously held that in circumstances where the taking is through a chain of events, the taking will not necessarily be found to have occurred at the time of either the first or the last such event, but rather when the interference has deprived the Claimant of fundamental rights of ownership and such deprivation is "not merely ephemeral",29 or when it becomes an "irreversible deprivation".30 Similarly, where the appointment of temporary managers ripened into a taking of title at a later date, the Tribunal found that the earlier date should be used when "there is no reasonable prospect of return of control". Sedco, Inc. v. National Iranian Oil Company, Interlocutory Award No. ITL 55-129-3 (28 October 1985), at p. 42.31 The Tribunal has observed that an important objective of the Revolutionary movement - and a first order of business of the new Government - was the assumption of complete control over all aspects of the oil industry, notwithstanding existing joint ventures with foreign oil companies. The first and most immediate action against the property rights at issue, the refusal, in line with this policy, to permit the Claimant to take its liftings under the JSA, started after production from the JSA fields had resumed in March 1979. The final formal "nullification" in August 1980 of the JSA only confirmed the then existing state of affairs. Between these two dates, the Tribunal considers that an early date is appropriate.

102. 
The Tribunal notes that the Claimant's loss was felt from the time of the first refusals to permit it to lift petroleum in April 1979. At that time the Claimant was still uncertain whether that situation was to be permanent, and NIOC first indicated that it would at some later time be willing to discuss the Claimant's request concerning its 1979 liftings. When no such discussions ensued, the Claimant's parent company felt compelled to enter, on 19 June 1979, into a separate sales/purchase agreement for crude oil with NIOC. But the Claimant still proposed, together with the other Second Party companies in their letter of 26 June, a provisional arrangement for liftings through the rest of that year which was based on the JSA and the rights under that agreement, and which they were waiting to discuss in the separate meeting envisioned by NIOC in the April general meeting. On 30 June, cash calls to the Claimant ceased. While the cessation of cash calls showed that IMINOCO did in fact no longer operate as provided for in the JSA, the Second Party companies still based their disagreement with the dismissal on 1 August of the Second Party's Managing Director on "the existing contractual arrangement", viz., the JSA, when AGIP requested an early meeting of the Board of Directors on the matter. It became clear, however, in the meeting which the IMINOCO Second Party companies had with the Khalili sub-commission on 29 September 1979 that there was no reasonable prospect of return to an arrangement with NIOC on the basis of the JSA. For it was in this meeting that the Second Party companies were told not only that they should regard the JSA as terminated, but also that their letter of 26 June did not deserve an answer. Consequently, the Tribunal finds that the Claimant's JSA rights were taken by 29 September 1979, and that the Respondents are liable to compensate the Claimant for its loss as of that date.

103. 
The Tribunal has consistently held that the applicable law for the purpose of determining the compensation owed by the Islamic Republic of Iran for deprivations or takings of property of United States nationals during the years immediately prior to the Algiers Accords is the 1955 Treaty of Amity.32 See, for example, Phelps Dodge Corp. and Overseas Private Investment Corp., supra; Thomas Earl Payne v. The Government of the Islamic Republic of Iran, Award No. 245-335-2 (8 August 1986), reprinted in 12 IRAN-U.S. C.T.R. 3; Sedco, Inc., supra; Amoco International Finance Corporation, supra; and Starrett Housing Corporation, supra. The Tribunal has recognized that the Treaty of Amity, whether or not it remains in force today between the two States, was in force in 1979 and 1980 and clearly was applicable to the investments at issue in these Cases at the times the claims arose.33 Therefore, the Treaty of Amity is the relevant source of law on which the Tribunal is justified in drawing in reaching its decision. 104. Article IV, paragraph 2, of the Treaty provides: Property of nationals and companies of either High Contracting Party, including interests in property, shall receive the most constant protection and security within the territories of the other High Contracting Party, in no case less than that required by international law. Such property shall not be taken except for a public purpose, nor shall it be taken without the prompt payment of just compensation. Such compensation shall be in an effectively realizable form and shall represent the full equivalent of the property taken; and adequate provision shall have been made at or prior to the time of taking for the determination and payment thereof. 105.
That contract rights, such as those taken by the Respondents in the present Case, are "interests in property" protected by the Treaty of Amity is clear from the above-quoted text and from the negotiating history of the provision, which indicates that the reference to "interests in property" was included at the insistence of the United States for the stated purpose of ensuring that contract rights in the petroleum industry would be protected by the Treaty in the same way as would the older type of property represented by a petroleum concession. 106. Thus, the Claimant is entitled by the Treaty to "just compensation", representing the "full equivalent of the property taken". As the Tribunal has previously held, where the property taken was a "going concern", compensation that meets the Treaty standard is compensation that makes the Claimant whole for the "fair market value" of the property at the date of taking. See the Thomas Earl Payne, Sedco and Starrett Awards, supra. In the present Case, the Claimant argues that its JSA rights constituted part of a "going concern", whereas the Respondents argue that, since the JSA had been frustrated, no such "going concern" remained that could have been taken. That the Claimant's JSA contract rights, which the Tribunal has found continued to exist until they were taken by the Respondents in September 1979, were part of a "going concern" is demonstrated by the history described above and, in particular, by the fact that the wells, platforms, pipelines, and storage facilities covered by the JSA produced petroleum from the JSA fields both before and after the taking in 1979, except for a few months in late 1978 and early 1979 when they were shut down as a result of strikes and violence related to the culmination of the Islamic Revolution.
That the Claimant's JSA rights were not, by themselves, "a going concern", but were only part of a "going concern", follows on the other hand from their nature and the way they were granted and defined by that Agreement. As described in detail above, the Claimant's rights under the JSA were first, to participate in the management of IMINOCO, the operating company set up together with the two other Second Party companies and NIOC, and thereby in the production of petroleum from the area covered by the Agreement, and second, to take its share of the petroleum so produced and to export it. The consequence of this contractual situation was that the lifting of the petroleum to which the Claimant was entitled depended on IMINOCO first producing petroleum in accordance with the JSA, and further that the Claimant's influence on the volume of such production was determined by the extent to which the JSA granted the Claimant participation in the management of IMINOCO. In taking these contract rights of the Claimant (and those of the other Second Party companies) in 1979, the Respondents took complete control over a going concern to the exclusion of the Claimant's (and the other Second Party companies') interest therein and appropriated to themselves the entire benefit from that going concern, including that part to which the Claimant was entitled by virtue of the JSA. As a result, the Claimant is entitled to compensation equivalent to the fair market value of the Claimant's interest in the JSA on the date of taking. 107. As far as the standard of compensation is concerned, the Respondents have argued that the Treaty of Amity must be interpreted in the light of changes in customary international law which, they assert, have taken place since the Treaty was signed in 1955. They point to the reference in the above-quoted Treaty provision to "international law", and to a general international law principle of "dynamic" interpretation of treaties.
They assert that customary international law as it exists today does not require compensation for expropriated property that is the "full equivalent" of the property, and that this is especially so in cases of large-scale nationalizations involving a State's natural resources. In that context they point to the statement in INA Corporation v. The Government of the Islamic Republic of Iran, Award No. 184-161-1 (13 August 1985) at p. 8, reprinted in 8 IRAN-U.S. C.T.R. 373, 378, that "In the event of such large-scale nationalizations of a lawful character, international law has undergone a gradual reappraisal, the effect of which may be to undermine the doctrinal value of any 'full' or 'adequate' (when used as identical to 'full') compensation standard as proposed in this case" (footnote omitted), and more particularly to Judge Lagergren's discussion of that reappraisal in his Separate Opinion in that case.34 However, the Tribunal need not express any view as to the asserted changes in customary international law, or the relevance of such law to a 1979 taking of property. First, the text of the Treaty provision does not support the Respondents' argument. The reference to international law is found in the first sentence of Article IV, paragraph 2, and its meaning is evident. It provides that the protection and security to be received by the property of nationals of one State within the other must be "most constant . . . and in no case less than that required by international law". This reference to international law clearly relates to the standard of "most constant protection and security" set forth in the same sentence and cannot be understood as modifying the taking and compensation requirements of the second and third sentences of that paragraph, which contain no reference to international law and which clearly and completely describe the requirements for takings and compensation.
Concerning the argument that treaties generally should be interpreted in the light of customary international law as it may evolve, the Tribunal has already found in the INA award that the Treaty of Amity as a lex specialis prevails in principle over general rules. This is certainly the case for the Treaty's compensation provisions, the purpose of which would otherwise be difficult to ascertain. 108. The Respondents also assert that compensation should be based on the net book value of the property taken and point in support of that assertion to a series of settlements in the global petroleum industry in recent decades which, they assert, demonstrate that both nations with petroleum reserves and companies engaged in finding and extracting those reserves accept net book value as an appropriate basis for compensation. The Tribunal notes, however, that such settlements are usually confidential and appear frequently to involve additional considerations, such as continued access to petroleum resources, so that the true compensation may be difficult to identify. As observed by the distinguished tribunal in Kuwait v. American Independent Oil Company (AMINOIL), (Reuter, Sultan, and Fitzmaurice Arbitrators, Award of 24 March 1982) at paragraph 157, reprinted in 66 International Law Reports (1984) at p. 606, such settlements do not constitute an opinio juris. In any event, such settlements are irrelevant to the applicable law in the present Case, that is the standard of compensation set forth in the Treaty of Amity. 109. The Respondents further argue that the taking of property in the present Case was a lawful taking, and that for such a taking, a lesser standard of compensation is required. The Claimants deny that the taking was lawful and further deny that a lesser standard of compensation is applicable to lawful takings.
However, the Tribunal need not decide in the present Case whether the taking was unlawful, for instance, as violative of stabilization clauses or for any other reason, because, whatever the relevance of that question as a matter of customary international law, it is irrelevant under the Treaty of Amity. Article IV, paragraph 2, quoted above, provides a single standard, "just compensation" representing the "full equivalent of the property taken", which applies to all property taken, regardless of whether that taking was lawful or unlawful. Clearly, as the Amoco International Finance Award, supra, recognizes, that standard applies to takings that are "lawful" under the Treaty, but the Treaty does not say that any different standard of compensation would be applicable to an "unlawful" taking. The Treaty states two requirements for any taking, that it be for a public purpose and that "just compensation", as defined therein, be paid promptly. In the present Case, there is no allegation that the taking, which extended to all petroleum production in Iran, was not for a public purpose, and the Claimant requests no more than "just compensation" based on the single standard of the Treaty. 110. The Tribunal believes that the lawful/unlawful taking distinction, which in customary international law flows largely from the Case Concerning the Factory at Chorzow (Claim for Indemnity) (Merits), P.C.I.J. Judgment No. 13, Ser. A., No. 17 (28 September 1928), is relevant only to two possible issues: whether restitution of the property can be awarded and whether compensation can be awarded for any increase in the value of the property between the date of taking and the date of the judicial or arbitral decision awarding compensation. The Chorzow decision provides no basis for any assertion that a lawful taking requires less compensation than that which is equal to the value of the property on the date of taking.
In the present Case, neither restitution nor compensation for any value other than that on the date of taking is sought by the Claimant, so the Tribunal need not determine whether such remedies would be available with respect to a taking to which the Treaty of Amity applies. 111. The Tribunal recognizes that the determination of the fair market value of any asset inevitably requires the consideration of all relevant factors and the exercise of judgment. In the absence of an active and free market for comparable assets at the date of taking, a tribunal must, of necessity, resort to various analytical methods to assist it in deciding the price a reasonable buyer could be expected to have been willing to pay for the asset in a free market transaction, had such a transaction been possible at the date the property was taken. Any such analysis of a revenue-producing asset, such as the contract rights involved in the present Case, must involve a careful and realistic appraisal of the revenue-producing potential of the asset over the duration of its term, which requires appraisal of the level of production that reasonably may be expected, the costs of operation, including taxes and other liabilities, and the revenue such production would be expected to yield, which, in turn, requires a determination of the price estimates for sales of the future production that a reasonable buyer would use in deciding upon the price it would be willing to pay to acquire the asset. Moreover, any such analysis must also involve an evaluation of the effect on the price of any other risks likely to be perceived by a reasonable buyer at the date in question, excluding only reductions in the price that could be expected to result from threats of expropriation or from other actions by the Respondents related thereto. 112.
One such method of analysis, and the method used by the Claimant, is the Discounted Cash Flow ("DCF") analysis, which calculates the Claimant's prospective net earnings over the term of the JSA and discounts them to give their value at the date of taking, using a discount rate that takes into account the perceived risks. In that connection, the Tribunal does not understand the Claimant's calculations of anticipated revenues from the JSA as a request to be awarded lost future profits, but rather as a relevant factor to be considered in the determination of the fair market value of its property interest at the date of taking. The Tribunal recognizes that a prospective buyer of the asset would almost certainly undertake such DCF analysis to help it determine the price it would be willing to pay and that DCF calculations are, therefore, evidence the Tribunal is justified in considering in reaching its decision on value. In Starrett, supra, the Tribunal based its Award on an expert's report that utilized the DCF method, but the Tribunal made various adjustments to the conclusions and the resulting amounts. The need for some adjustments is understandable, as the determination of value by a tribunal must take into account all relevant circumstances, including equitable considerations.35 While a DCF analysis can, and often should be, an essential and even central component in that determination of value, it must not exclude other relevant considerations. In this connection, the Tribunal notes that in Amoco International Finance, supra, Chamber Three considered the DCF method inadequate and distinguished between the assets of a going concern, including good will and commercial prospects, which it noted are closely linked to the profitability of the concern, and what it described as the "financial capitalization of the revenues which might be generated by such a concern . . .". 113. 
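[Editorial aside: the mechanics of the DCF method described above can be sketched in a few lines of code. The cash flows, the five-year term, and the higher risk-adjusted rate below are hypothetical illustrations only, not figures from the record of these Cases; the four and one-half percent rate is the one the Claimant is said to have used.]

```python
# Illustrative DCF valuation: discount a stream of projected annual
# net earnings back to the valuation (taking) date.
# All figures are hypothetical placeholders, not Case data.

def dcf_value(cash_flows, discount_rate):
    """Present value of year-end cash flows at a constant discount rate."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

# Five years of flat $10m net earnings, valued at the low 4.5% rate
# versus a higher rate reflecting perceived risks:
flows = [10.0] * 5  # $ millions per year (hypothetical)
low_rate_value = dcf_value(flows, 0.045)
high_rate_value = dcf_value(flows, 0.15)
print(round(low_rate_value, 2), round(high_rate_value, 2))
```

The comparison makes the Tribunal's point concrete: the choice of discount rate, which is where perceived risks enter the calculation, materially changes the resulting value.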
In the present Case, the property taken is not a manufacturing or processing enterprise, but rather contract rights to continue to exploit natural resources previously discovered pursuant to the contract, and the Tribunal considers the use of the DCF method by the Claimant a relevant contribution to the evidence of the value of the Claimant's contract rights which have been taken by the Respondents. However, the Tribunal agrees that it is not an exclusive method of analysis and that all relevant considerations must be taken into account. As used by the Claimant, with its production and price estimates and a very low discount rate (four and one-half percent), the Tribunal cannot agree that the method has resulted in a proper estimate of market value. There are, for example, risks, such as the risk of reduced future production as a result of national policy changes flowing from the Iranian Revolution, that should be taken into account, even if such risks cannot be quantified with any certainty in the anticipated production or as part of a discount rate. The Tribunal therefore proceeds to its determination of the value of the Claimant's property interest on the date of taking by means of consideration of all relevant circumstances as revealed by the evidence presented in the Case. 114. In this connection, the Tribunal does not intend to make its own DCF analysis with revised components, but rather to determine and identify the extent to which it agrees or disagrees with the estimates of both Parties and their experts concerning all of these elements of valuation. 115. Another method which can help the Tribunal verify its findings concerning the value of the Claimant's JSA interests is to value the tangible investments made by the Claimant under the JSA as well as the Claimant's intangible assets, including the profitability of its share of the going concern, and to deduct from these total assets the Claimant's liabilities.
While this method also values the revenue-producing potential of the Claimant's JSA interests it puts more emphasis on actual investments and past performance as a basis for the assessment of expected profitability than on forecasts of expected cash flows. This method, which might be described as an underlying asset valuation approach, first calculates the tangible assets at their depreciated replacement value, thereby adjusting book value which the Respondents, in its net form, have put forward as their preferred measure of compensation for this Case. In order to quantify the intangible assets including profitability of the property interests taken, an appropriate income figure is determined based on historic earnings, to which a multiple is applied, which takes into account legitimate expectations in an oil venture of this type generally and in the context of the JSA more particularly. 116. The Tribunal is mindful that, as in any other case, its findings and conclusions are determined by the applicable law and the particular circumstances of this Case. With regard to the standard of compensation, the Tribunal has pointed out, supra, that it applies the lex specialis of the Treaty of Amity and that it need not therefore make any finding with respect to customary international law. Similarly, with respect to the methods of valuation, the Tribunal has used methods it considers appropriate in light of all the issues and evidence in this Case, including the nature of the contractual arrangements represented by the JSA, and the Tribunal makes no finding with respect to the valuation of other types of contracts or other types of property. 1The following note, signed by Mr. Briner and Mr. Aldrich, is appended to the Award: "Having fully participated in the deliberation of the Case and having been informed of the time when the Final Award would be signed at the Tribunal, Mr. Khalilian was present but declined to sign.
In these circumstances we conclude that the Tribunal is justified, and in fact obligated, by international law and precedent to proceed with the signature of the Award. Any other conclusion, in a continuing tribunal of this type with many cases on its docket, would permit the Tribunal's work to be sabotaged. This statement is made pursuant to Article 32, paragraph 4, of the Tribunal Rules." See also Separate Statement of Mr. Briner, p. 240 below. 2Statement by Judge Khalilian as to Why it would have been Premature to Sign the Award, see p. 194 below. See also Supplemental Statements at pp. 245, 263 and 277 below. 3Concurring Opinion, see p. 162 below. See also Supplemental Statement at p. 256 below. 4Filed 29 June 1989; Persian version not filed: see Additional Documents, p. 305 below, passim, and Award on Agreed Terms No. 461-39-2, p. 285 below, declaring the English version of the Award to be deemed by the Parties as null and void. 26The Full Tribunal observed in the Oil Field of Texas case that it is "clear that NIOC is one of the instruments by which the Government of Iran conducted and currently conducts the country's national oil policy". Oil Field of Texas, Inc. v. The Government of the Islamic Republic of Iran, Interlocutory Award No. ITL 10-43-FT (9 December 1982) at p. 14, reprinted in 1 IRAN-U.S. C.T.R. 347, 356. See also, Mobil Oil Iran, supra, at p. 38. International law recognizes that a State may act through organs or entities not part of its formal structure. The conduct of such entities is considered as an act of the State when undertaken in the governmental capacity granted to it under the internal law. See Article 7(2) of the Draft Articles on State Responsibility adopted by the International Law Commission, Yearbook of the International Law Commission 2 (1975), at p. 60. The 1974 Petroleum Law of Iran explicitly vests in NIOC "the exercise and ownership right of the Iranian nation on the Iranian Petroleum Resources".
NIOC was later integrated into the newly-formed Ministry of Petroleum in October 1979. 27The text is quoted in Mobil Oil Iran, Inc., supra, at para. 120. 28While according to IMINOCO's Statutes the Second Party companies could have requested an extraordinary general meeting on the issue of future liftings, they apparently tried to arrive at a negotiated solution with NIOC. This did not mean, however, that they acquiesced in the situation, and in fact no new agreement replacing the JSA could be reached. See supra, paras. 35 ff. 29Tippetts, Abbett, McCarthy, Stratton, supra, at p. 11. 30In International Technical Products Corporation v. The Government of the Islamic Republic of Iran, Award No. 196-302-3 (28 October 1985) at p. 49, reprinted in 9 IRAN-U.S. C.T.R. 206, 240-241, the Tribunal held: Where the alleged expropriation is carried out by way of a series of interferences in the enjoyment of the property, the breach forming the cause of action is deemed to take place on the day when the interference has ripened into more or less irreversible deprivation of the property rather than on the beginning date of the events. The point at which interference ripens into a taking depends on the circumstances of the case and does not require that legal title has been transferred. (Footnote omitted). 319 IRAN-U.S. C.T.R. 248 at 278-9. 32Treaty of Amity, Economic Relations, and Consular Rights Between the United States of America and Iran, signed 15 August 1955, entered into force 16 June 1957, 284 U.N.T.S. 93, T.I.A.S. No. 3853, 8 U.S.T. 900. 33The International Court of Justice reached a similar conclusion in May 1980. See Case Concerning United States Diplomatic and Consular Staff in Tehran, Judgment of 24 May 1980, I.C.J. Reports (1980) at 28.
34Judge Lagergren's evaluation of the aforementioned reappraisal led him to the conclusion "that an application of current principles of international law, as encapsulated in the 'appropriate compensation' formula, would in a case of lawful large-scale nationalizations in a state undergoing a process of radical economic restructuring normally require the 'fair market value' standard to be discounted in taking account of 'all circumstances'." INA, Lagergren, Separate Opinion at p. 8, reprinted in 8 IRAN-U.S. C.T.R. at 390. But see Judge Holtzmann's Separate Opinion where he pointed out that the statement in the Award was obiter dictum as the case was decided under the Treaty of Amity, not customary law, and explained why he considered the statement an erroneous characterization of the current state of customary international law. 35See Aminoil Award, supra, paras 78 and 144.
In September 2009 I purchased a secondhand GRS train comprising a live steam GWR 2021 and seven waggons. I made this purchase because I have a garden railway with both gauge 1 and gauge 3 tracks, but had no operable G3 stock. I shall not go into how this situation arose, but the GRS train seemed like a perfect solution - instant train! - all I had to do was hand over lots of money. The engine is very attractive, and a British goods train would be a very welcome presence on the railway. Also I would buy myself some time to do the other tasks that we all manage to acquire for ourselves, before building my own G3 train. A perfect plan. A plan which went wrong; but you knew that. Here is the train on the track for the first time, not in steam. The assembly workmanship turned out to be fair, and sometimes a little less than that. This has turned out to be acceptable, except where it sometimes compounds the main problem, which is the subject of these notes. And so I shall get to the ugly part of my reporting right away. To be nice about it, the engine has turned out to be a pile of junk. The problems lie mainly in the design and manufacture, and some in the assembly and fitting, as I think the reader will discover. It should be noted that my criticisms are of the engine as a miniature live steam engine, not as a model of the prototype. I imagine that the electric powered version is a very nice engine; but an actual steam engine brings realities that cannot be fudged. And the carelessness inherent in the production of this engine reflects on the suppliers, who sell, not give away, the equipment. The market is a small-volume niche, of course; so large engineering efforts are not likely to be feasible. But a poor product that frustrates the market does not seem to be a good idea. I was mollified to discover that I am not the only one disappointed by this engine. As examples, I list things that other people have written about this (live steam) engine.
These comments mostly are from gauge3.org.uk or g3forum.org.uk; my comments are in square brackets.

"... the sluggish performance which has been reported by other owners."

"I have read complaints of the poor performance of the GRS R-T-R model of the 45XX class Prairie tank which I believe is powered by a similar steam motor; one owner complained his model would not move the GRS Great Western 2-coach 'B Set'!"

"Beware I had mine built due to pressure of work and teh guy who built swore he woul never touch one again!" [sic]

"... and a roundhouse burner was fitted to avoid it sounding like a doodlebug!"

OK, enough of that odious stuff. However, in my experience there is too much truth in the words above. My first attempt at running the engine was just awful, and so I put in quite a lot of work right away trying to put things right. But I ran into time constraints - which was somewhat bitter given my decision to spend money to get an "instant train" - and had to stop. That was more than a year ago. But now I have found the time to have another go at it. I have had some success, and hope for more. And I have realised that I really owe it to myself and the community to record information, success, and failure. And the method I think will work out best is for me to record things as I go along, diary style, from where I am today. This will not produce a logically correct ordering of topics; but it will get information recorded. So the rest of what is here is something of a blow-by-blow account. I trust that the content will be interesting enough for the reader to see beyond misteaks, contradictions, and reversals. And I shall be strident in my criticisms since I think that will be healthy in the long run. I am determined to get resolution this time around. The engine is going to run or be scrapped.
If you look closely at the coupling rod pin on the front axle you can see that it is blurred. As soon as the throttle was cracked the front wheels started to turn, but nothing else happened. It was quite funny really; the engine appeared to be standing on level track with nothing holding back the engine other than its own weight and friction, spinning its powered wheels (the motor drives the front axle only). So, I re-installed the coupling rods, which involved drilling out a bushing. I have no idea why that was necessary; it seems that some selective assembly had been done before and I was correcting a fitting error. On the re-start, the engine ran around the track, climbing and descending 1:33 gradients, running through junctions correctly, and generally behaving itself, for the first time. Hooray! However, it would only do this if I ran it backward; running forward caused a couple of derailments. So here is where I am currently. The engine is running uphill toward the camera with about 45 psi boiler pressure (I checked the gauge earlier against a large industrial gauge; the small gauge read nearly 50 psi with the large gauge reading 55 psi). The engine has never done nearly this well before; and I shall describe how I got it to this point - but after addressing the wheel spinning and derailments. After the successful run I checked the position of the centre of gravity both with an empty fuel tank and with a full fuel tank - actually, faked with a cup of water. The balance point runs from about 3/8 inch to about 7/8 inch behind the centre axle with the fuel tank empty and full respectively. The screwdriver stuck in at the right side is holding up the wood for the image shot; the reader will just have to believe me that this is the (highly unstable) balance point. What is more, if the engine is running forward and uphill, bearing in mind that the water in the boiler will move to the back of the boiler, what is the weight on the front wheels?
I have no idea, but I do speculate that this, coupled with a bit of compensation sticking, might be the cause of derailment. I am not convinced of this idea, but it is something to think about; and there is that, strange sounding, claim by someone else that "It derails if I allow more than 25 p.s.i." I wonder if running backward would stop that problem. I have found what I consider to be a fundamental flaw in the motor design. This flaw can be largely eliminated and evidence that this is an important issue comes from "before and after" bench tests on air, as well as the improvement in track performance under steam. In the bench tests, which were on a chassis that had some tight spots and other unsmoothness, the air pressure required to keep the engine idling went from the 15-18 psi range to a, steadier, 8 psi. The problem lies in the porting of the motor. A common fundamental design objective in an oscillating engine is that there be instantaneous no-flow at dead centre, but in all other crank positions the cylinder be either consuming supply pressure steam or discharging to exhaust. Satisfying this objective can be achieved by having port diameters (ports usually are round) so that at dead centre the perimeters of the ports are line-to-line; and this, also, is usual. By line-to-line I mean that, if you could see them, the ports would look very similar to OOO, but with the edges of the Os touching. The ports from one side to the other of this simple diagram would be supply, cylinder, exhaust; with supply and exhaust in the fixed portblock and cylinder in the oscillating cylinder block, and clockwise rotation looking at the text diagram. If the no-flow design objective is not followed by having the ports too small then there is a part of each cycle of the engine where the cylinder is sealed but the piston is moving. This means that the cylinder (with piston) is compressing the exhaust followed by expanding it again. 
Even if these two actions cancel to some degree they will not be as powerful as using this part of the cycle to exhaust completely and start a new power stroke with fresh supply steam. Furthermore, the steam flow into the cylinder will not be as great as it would be with a larger port; and this will reduce the pressure in, and hence the power from, the cylinder. The extent of the power reduction needs analysis to establish its magnitude; but, intuitively, the ratio of the port diameter to the cylinder diameter will be a major factor. If the no-flow design objective is not followed by having the ports too large then there will be a part of the cycle where there will be a direct leak from supply to exhaust; which is wasteful, and also cuts down on the length of the power stroke. The geometry and trigonometry required to design or analyse an oscillating engine are quite straightforward and are all that is required. Steam expansion characteristics, for example, are not significant simply because of the (wasteful) consume-discharge cycle of the oscillating engine. The only important analysis improvement over the past forty years, since geometry and trigonometry have not changed, is the ability to mechanize the calculations with a computer; which permits the glorious, luxurious, ease, of "what if" calculations. However, an evening spent with paper, pencil, and a cup of tea can produce instructive and satisfying results. In the motor that I have, the ports in the portblock and the cylinder were 0.054 inch diameter (#54 drill bit). My measurements of the hardware and subsequent calculations assuming the instantaneous no-flow criterion give 0.078 inch diameter (#47) as the design target. So I convinced myself that the valve events for the engine were poor. Also, probably the steam flow was constricted excessively since the area ratio of the two port sizes is more than 2. 
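The line-to-line criterion and the port-area comparison above can be reduced to a few lines of arithmetic. A sketch follows; note that the crank radius, pivot-to-crank distance, and pivot-to-port radius in the example call are invented placeholders and not measurements of the GRS motor, while the 0.054 and 0.078 inch diameters are the measured and target sizes discussed above.

```python
import math

# Port-geometry check for a simple oscillating cylinder, assuming the
# usual layout: the cylinder pivots on a trunnion, the block ports sit
# either side of the cylinder-port centreline, and "line-to-line" at
# dead centre means port centres are spaced one port diameter apart.

def line_to_line_diameter(crank_radius, pivot_to_crank, pivot_to_port):
    """Port diameter giving edge-to-edge closure at dead centre, with
    the block ports drilled at the cylinder's full-swing positions."""
    swing = math.asin(crank_radius / pivot_to_crank)  # max swing, radians
    return pivot_to_port * swing  # arc spacing of the port centres

# Illustrative dimensions (inches) - placeholders, not measured values:
d_target = line_to_line_diameter(0.3, 1.5, 0.4)
print(round(d_target, 3))

# Flow-area ratio of the measured #54 ports to the #47 target,
# using the diameters given in the text:
ratio = (0.078 / 0.054) ** 2
print(round(ratio, 2))  # a little over 2, as stated
```

The second figure simply confirms the claim in the text that opening the ports from #54 to #47 more than doubles the flow area.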
I note that a common port size for oscillators with cylinder bores in the quarter to three-eighths inch (6-10 mm) range is 0.063 inch diameter (#52). And I have an 0 gauge engine with a single-cylinder, double-acting, 1/4 inch bore oscillator that, in the early days before I put a throttle on it, would run at 8 mph (about 5000 rpm for the motor); and that has 0.078 inch diameter (#47) ports and passageways. The first thing that I did to my motor was to open up the ports to 0.078 inch diameter (#47). That is twelve ports: two on each cylinder, eight on the portblock. The objective was to change the valve timing to correspond to the instantaneous no-flow criterion. This was the only thing done between the "before and after" bench tests referred to earlier. Re-working the ports on the motor must be done carefully. The main problem is that twist drills "grab" so easily in brass under the conditions that pertain here, and they can vandalize an existing bore instantly. It is important not to use a drilling machine or power tool for this job. Also, to avoid stripping down the block and cylinders, it is important to avoid letting swarf fall into the engine parts. I proceeded by holding the workpiece in a bench vice with soft jaws, with the portface angled down from the vertical; the idea here being that swarf would fall out of, rather than into, the workpiece. Then I put a #47 drill bit into a pin vice and tightened it hard; I did not want the drill bit to slip part-way through the operation. I did not use cutting oil or anything similar; but I do not know if that was the best decision. I pushed the pin vice gently to start the cut - as square to the portface as I could - and just as soon as the drill started to bite, around half a turn, I pulled on the pin vice whilst continuing to make the cut. Another couple of turns saw the drill bit come out of the brass, and there was a counterbore about 1/16 inch deep.
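Whether such a counterbore is deep enough can be estimated with a little arithmetic: steam enters through the port's circular area and must leave through the cylindrical wall of the counterbore, so equating the two areas gives a break-even depth. A rough sketch in Python, assuming the 0.078 inch port size:

```python
import math

d = 0.078  # port diameter in inches (#47 drill)

port_area = math.pi * d ** 2 / 4.0   # steam enters through this area
def wall_area(depth):
    return math.pi * d * depth       # and leaves through this area

# Break-even: pi*d^2/4 == pi*d*h  =>  h = d/4
h_equal = d / 4.0
print(f"break-even depth: {h_equal:.4f} in ({h_equal * 25.4:.2f} mm)")
```

The break-even depth is just under 0.02 inch (about 0.5 mm), so a 1/16 inch (0.0625 in) counterbore has roughly three times that, which is ample.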
I checked for swarf in the hole and, in a couple of cases, removed some with a dental pick. All the holes had a small volcano crater on the portface after the re-work; but this was easy to touch up with a honing stone - I used oil this time - and finally wash off the portfaces with naphtha, alcohol, or something similar. The image shows the counterbores in the ports quite nicely. The one on the left appears to be a little deeper than the one on the right. This difference is not important so long as the depth is adequate to allow free steam flow. Comparison of the port area with the cylindrical area of the counterbore gives a depth of a quarter of the port diameter for equality. But the requirement is more complicated than that. My guess is that anything greater than 0.04 inch (1 mm) is plenty. It definitely is not worth the risk of taking a second cut simply to even up the depth. On the workbench, I made big-picture decisions about how I should proceed with modifications and configuration of the whole Pannier Tank engine. Amongst other things I decided that the minimal running configuration should include the frame overlay. The reason for this decision is that the overlay provides the horncheeks, and I guessed that the lack of these was at least partly the cause of binding of the compensation, leading to derailing. Also decided was that the superstructure, other than the overlay, should not be necessary for running. This decision requires the elimination of parts of the superstructure from the runnable configuration. In particular, the rear of the engine requires modification since, as designed, the gas tank has to be mounted after most other assembly. In general, the cab area is somewhat jumbled in the design. The status of the port modifications: are these adequate to declare the motor powerful enough, or do I need to delve deeper and see if the passageways can be opened up? Has the re-work of the compensation and the fitting of proper axle spacers solved the derailing problem?
What is the status of the boiler steaming capability? A Roundhouse burner is on its way, but how much am I depending on it to solve steaming inadequacies? This replacement burner has been purchased to solve the noise problem; am I expecting too much effect on the heating performance, and if so, what is the boiler solution? My feeling was that satisfactory answers to these questions would convince me that I am able to make the engine usable, without setting unrealistic expectations. So I did another track run yesterday. Needless to say it did not work out to plan; but, happily, it possibly went better than expected. I ended up with a list of things in my head that I considered must be done to the engine, all feasible I think, which would yield a good result. What follows is an attempt to capture that list. The list is long, but it fits with a gentler schedule. But, first, a picture; the engine is shown in minimal running configuration, hauling seven waggons, uphill. The engine not only started a seven waggon train on the level, but it would start the train on a 1:33 uphill gradient. Thus I am content with the port re-work, and deem that further investigation of the motor is not necessary. Speed on other parts of the track, admittedly mostly level or downhill, was more than adequate; this also suggests that the motor passageways are acceptable - not necessarily right, but not worth playing with either. Actually, the current performance is becoming a little difficult for me: I am beginning to wonder if it really was as bad before the port modifications as I have been claiming. I am very glad that there is a record of others finding the performance lacking. The motor needed 30 psi (2 bar) to perform as noted above, and also to take the 1:33 hill with a running start, i.e., as part of normal running around the track.
The effect of steam pressure variation was very noticeable, and the margin was narrow: a needle width above the 25 psi line (on a miniature gauge), and the train would make the hill; a needle width below the line, and it would not. It appears that this engine wants to be run at high pressure; I recall 45 psi (3 bar) being mentioned somewhere on the G3 forum; this appears to be a good target pressure for a responsive engine. The motor quickly cools off and becomes sluggish and messy, throwing around oily water. Superheating the supply with a stainless line through the burner flue, Roundhouse-style, is worth looking into. Lagging the motor might help, but I think it probably is impracticable. Radio control is necessary on my hilly track. The train did run around with no throttle modification whilst running, hence slow uphill, fast downhill; but I do not like scale 200 mph goods trains, nor excessively toy-like behaviour. Sporadic hesitation was noticed; I think that this is binding of the coupling rods or some such. There was no noticeable change in the derailing problem; I was afraid this would be the result, since I knew that I had done a good job of cleaning up the compensation, etc., the first time around. However, the nature of the problem came into sharper focus as a result of running this time, particularly in the light of the weight distribution balance measurement. First, I note that almost all of my track is in good condition, with smooth, accurate pointwork, and free of derailing tendencies during normal running. However, the sun has wrecked a multi-span wooden bridge, and the bridge is due for replacement. The damage manifests itself as a scalloping caused by each span drooping downward in the middle. And that results in three crests with five feet spacing along the length of the bridge. This trackwork is below acceptable standards and does cause problems at times with gauge 1 running.
If the Pannier Tank is run backward there is no derailing, nor hint of derailing, running in either direction, at any speed, over the poor track, nor anywhere else. If run forward, the engine derails every time at the worst track crest, and sometimes elsewhere. Although I think that, when derailing occurs elsewhere, it is because the wheels already have been displaced on the known troublesome track. Derailing in other places always has been physically close to the scalloped bridge. It is quite clear that the engine suspension, including compensation, works properly running in one direction and not in the other. The sticking problem referred to before, and which still can be produced on the workbench, is a red herring. That problem can be produced only at the full extent of compensation travel and must be done by lifting one corner of the engine, so that the axles are cocked from side to side. The binding that occurs is a result of this misalignment which jams together the moving components; the jam will free fairly easily with some correcting pressure or tap. The necessary conditions for jamming do not occur when running. I do not have a good image showing the scalloping; but the reader may be able to see the effect in the image below. The Esso tank waggon appears to be going downhill, whilst the following five plank waggon appears to be (and is) going uphill. This is happening at the worst scalloped crest. By looking at the image of the balance check that appears earlier it is fairly easy to see what is happening when the engine derails. When running backward the centre of gravity is between the leading and centre axles, and the load on the compensation pivot forces down the compensation (which is quite free to move) and the leading axle, and this keeps the leading wheels on the rails. Hence there is no problem. 
When running forward these two factors also keep the centre and (now) trailing wheels on the rails; but the centre of gravity is just too far back to force down the front of the engine quickly enough, and the leading wheels are launched into space. Correcting this problem seems to require getting the centre of gravity further forward than it currently is, in order to get more weight on the front wheels. One possibility is to rotate the motor so that it is in front of the front axle; but this would make the motor poke out beyond the buffer beam. However, rotating the motor also would make a superheating line through the burner flue a simpler installation. Chunks of lead also come to mind for weight re-distribution, of course. For derailment to occur, it does seem to be necessary for the track to be bad; the engine will not come off good track. So there is plenty of room for discussion about how far weight re-distribution should go. On my track the burner is not quite powerful enough to keep up the steam pressure; the motor seems to consume quite a lot of steam. I am hopeful that the Roundhouse burner will deal with this as well as the noise problem. The gas tank is too large for the boiler. The last run lasted a long time, more than an hour I think, and I terminated the run prior to gas exhaustion because I was becoming concerned about running out of water. The boiler was showing signs of decreasing steam generation capacity, and now I think that the top of the flue became uncovered, which would mean that heat was getting to the water partly by conduction along the copper around the flue wall rather than directly through the flue wall. A water gauge, mentioned by someone on the G3 forum, would be a useful addition for monitoring this issue. Although the usual convention, of having the gas tank sized so that the gas runs out before the boiler water does, makes more sense at this point.
It is interesting that the boiler was built with blind bushes for boiler support and for attaching the dummy smokebox door, but not with through bushes for water level checking or water supply during a run. This lack is not uncommon, but is disappointing whenever it occurs. The extra cost of a couple of bushes, soldered in during initial construction, and blanking plugs, is trivial. So it seems at the moment that a larger capacity burner and a smaller capacity gas tank are desirable modifications. My impression is that the engine uses a lot of gas. Well, there has been quite a long hiatus in my re-building activities - from April until November, in fact. The first reason is that I ran into trouble with my scalloped wood bridge (see an earlier image): I started to have fairly regular gratuitous uncouplings when trains were run over the scallop crests - this was with gauge 1 trains fitted with Kadee couplers. This track characteristic does not impress visitors and is a precursor of nasty crashes. So, my railway efforts went into replacing the wood bridge with a new steel bridge. Since I was doing a major replacement, I also installed a steam-up siding that I had wanted but never thought would happen. The task is finished, but it has been a huge time sink. The images show the new bridge (20 feet long), the new steam-up bay, and a new junction leading to the bay. Mostly it is all new construction, although I have re-used some of the wood bridge sleepers and all the rail. In the images the bridge itself cannot be seen very well; it is a ladder, constructed from 1 inch by 1/8 inch strip and 1/2 inch square tubing, and the track rests on it with some lateral support, superfluous really I think, which stops the track moving sideways too far. I like to say that the bridge material is hot-rolled slag with steel enhancements; however, it was quite easy to solder with 56% silver solder. The remaining task is some minor gradient smoothing.
But it is all good enough for me to get back to the G3 Pannier Tank re-build, which I am quite anxious to complete. A note to make is that the images immediately below were taken in September; the weather was better, but a couple of tasks had not been completed, most noticeably the junction point actuation mechanism. Other than the gradient smoothing, all these tasks are complete now, in November, and this note should explain any minor differences in images as this description proceeds. I am continuing to write largely diary-style. There is already a problem with the railer on the end of the siding - it works so well that I like to load trains into this siding; so the siding has become a train-staging area, rather than a steam-up bay. And that does not work if there are engines already in place on the siding. I may have to move the railer or make another. But then, in September, I ran into the second problem that became another delay for the Pannier Tank rebuild. When the track was first laid, in 2004, it was put into a newly created, and partly raised, walled garden. This new garden had 18 cubic feet of soil put into the raised, walled area. Now, I understood the need to tamp down new dirt, and I thought that I had done this adequately. Hmmmm. It will suffice to write that, at present, much of the original track bed is at least six inches below the surface. If anyone reading this is thinking of laying new track on new dirt, please let this be a very serious warning - unless you like re-doing jobs until you are sick of them. So it got to the point where, at the worst sink area, I ran out of width - I was unable to raise the track any more by stuffing rock underneath it, because the stones would fall out of the sides and create a real mess. The effects of the angle of repose, and aesthetic and available space considerations, demanded another solution. Which meant another bridge (but only 6 feet long this time 8-).
It all looks much better, and I am glad it is (almost) done. But it took me from September to November. And there is no image because we have a storm going through today and the railway is under a few inches of snow. I started today, the Second of November. The plan is to get all the technical stuff complete, test the engine, and then, assuming success, take it apart and re-paint it, etc. On their engines, Roundhouse run a copper tube down the length of the boiler flue as a superheater. I decided to copy this method; the perceived problem being installation, with the need to get a non-kinked bend at the delivery ("smokebox") end with an accessible fitting on it. This task turned out to be quite simple, much to my delight. I soldered a fitting on the cab end; annealed the other end for about four inches; put a tube-bending spring on that end; fed the tube into the flue; pushed and pulled it sideways and out of the chimney flue with a wood spatula and pliers; removed the spring; and soldered another fitting onto the end. This image shows the steam pipe installation, which is the tube with the union in it. The steam pipe feeds into the top of the motor. The other tube is the exhaust, which comes out of the bottom of the motor. Also shown is a guard that I installed to protect the motor and, primarily, the main drive gears. I plan to insulate the steam pipe. I stayed at the front and modified the chimney a little; the chimney has two problems. One problem is that the bottom of the chimney fouls the boiler, because not enough vertical room has been allowed for it where it pokes through the false tank superstructure. So the bottom of the chimney is hard against the boiler top once the superstructure is screwed in place; the screws being remote from the chimney area. I may shim up the superstructure when I do the final assembly; I do not think that there is much more that can be done.
The other problem is that, as designed, steam oil that is carried up the exhaust tube and into the chimney dribbles down onto the boiler and makes a mess of it. I have wrapped my boiler with ceramic sheet, and so I have soggy lagging as well as an oily boiler; see the image. The real plan is that the oil dribbles down the chimney flue and drops onto your friend's track, making it his problem. The image shows how I dealt with the oil problem. I soldered a tube into the chimney. This tube extends down into the flue past the top of the exhaust tube, which is configured somewhat like a blast pipe in a smokebox. The hope is that this chimney extension will implement the real plan for spent oil distribution, instead of making a mess of my boiler. In the image, the chimney is upside down. When installed, the chimney passes through the superstructure seen in the background, and is secured by the large nut that is then on the inside of the superstructure. The nut becomes flush with the bottom of the chimney, and together they rest hard on the top of the vertical boiler flue. There is not a lot to write about. The images show the pipework before and after installing the coal bunker/butane tank. The Roundhouse burner can be seen, as can the steam pipe entering the flue alongside the burner. The regulator handle is temporary, since no work has been done on installing radio control. The next thing to do was run the engine to find out what had worked, and what had not. I ran the engine as before, hauling my seven waggon goods train. It was a grey day: not a lot of light; my camera struggles under such conditions. In the image the dummy smokebox door is notable by its absence; but also note that the train is going up a 1:33 gradient. The train ran as before in many ways; but there were improvements. Here is what I saw. First, it must be recorded that there was no problem with derailment.
Thus I conclude that the problem recorded earlier was in my poor track, and was resolved by my new steel bridge. The engine ran forward perfectly well; and I forgot to run it in reverse (the weather was closing in). The Roundhouse burner may or may not have improved performance; but it did not improve the noise level; the engine still whistles. I am beginning to wonder if a standing wave is being set up in the flue. This may sound somewhat wild to some people, but the fact is that the calculation for oscillations in a half-open tube 9.5 inches long gives a frequency in air in the range of 350-550 Hz, depending on the air temperature. This puts the sound in the range from F above middle C to C one octave above middle C; which is about right to my not-very-musical ear. I do not know what could be exciting the oscillation, but the burner could be it. When I think about it, I have heard many other engines make similar whistling sounds; but not so loudly. So, the noise is there, may be difficult to quieten, and is very obnoxious. The superheater appeared to work well. Gone was the oily water spewing out everywhere. And the engine was more responsive; just what you would expect from dry steam. The boiler made enough steam, and the engine and train settled into a steady state, running at about 40 psi with the gas setting that I chose. When I stopped the train the pressure quickly rose to 50 psi and the safety valve opened. This improved performance I am inclined to attribute to the superheater; but the Roundhouse burner may have helped. 40 psi may seem like a high pressure, but recall that, part of the time, this is a heavy train being hauled up a 1:33 gradient. As before, there was no radio control, so the train was slow uphill and excessively fast downhill. The only significant negative item involves the safety valve, and, perversely, it concerns me for safety reasons. The safety valve is sealed in the boiler bushing with an o-ring.
Before the run I saw that this o-ring had stretched, so I replaced it (010, Viton, 70 durometer). This seal started to leak as the pressure came up, and continued to leak throughout the run. The safety issue is that failure of the seal easily could result in scalding water being thrown around. The problem is that the o-ring is not constrained by metal, only by its own strength. A common way of handling this kind of installation is shown in a diagram on the Roundhouse website under Technical, and then under Safety Valve: a groove can be undercut in the valve which, together with the attendant clamping pressure, provides adequate support at the relatively low pressures involved in small steam engines. In contrast, the safety valve on my engine is shaped as shown in the image. If there is enough metal in the safety valve body then I shall machine a groove in it; otherwise a change to a copper washer probably is a good idea. My engine runs a lot better than it did; it will start a fairly heavy train on a 1:33 uphill gradient; I have not tried it on rice pudding skins. There is more to do, for example I still want to fit radio control. But these notes already address their objective adequately. What follows is a summary of things to consider for anyone possessing one of these engines, especially if it is yet to be built. In the text above I identified three areas to investigate: Motor Power, Derailing, and Boiler Steaming. Here I group the summary in the same way, adding Miscellaneous as a fourth area. I think that other things could be done to the motor; for example, I suspect the inlet and outlet banjos are a little restrictive on the steam flow. But opening the ports on my motor was completely adequate to give the engine sufficient power. Re-working the ports is a tricky task that needs experience in machining brass. First and foremost: the primary cause of the derailing was not in the engine, but in my track.
However, I do not think that it is going too far to write that the engine is more prone to derailment when going forward than when going backward. And the reason is the axle loading configuration (or weight distribution). I am still intrigued by the comment that "It derails if I allow more than 25 p.s.i. ...". If your engine does derail consistently, try running it in reverse as an experiment. It might be instructive. If your engine does derail consistently, check that the compensation mechanism is working freely and not adversely affecting the axle loading configuration. My engine has a different-sized coupling rod pin on one wheel; I have no idea how this happened; perhaps the builder did it for some reason. But it is important because of the axle loading configuration. The driving wheels are lightly loaded compared to the coupled wheels. This means that the coupling rods really have to work on this engine, unlike on many models. Fit a superheater. This certainly made the engine a lot less messy and more responsive. But, also, I think that it improved efficiency, since superheaters insert energy just where it is needed. This resulted in the boiler being able to supply enough steam to meet demand, which was a little questionable without a superheater. If you do fit a superheater, read the Roundhouse notes on steam oil. Fit a Roundhouse burner. I do not know if this is worthwhile or not. It does not solve the awful whistling noise. If you know how to baffle the flue to eliminate the noise, please let everyone know 8-). Modify the chassis and superstructure to enable simple disassembly. Steam engines need maintenance. This is far from a small task; but if you do not do it, and you do need to take something apart, you will be breaking things - and then you will be doing chassis modification. For example, the assembly instructions are to fold over the horn block retaining tabs to secure the wheelsets.
Thinking about unfolding thin metal tabs and then re-folding them is your homework task. Do something about the safety valve sealing to avoid the possibility of a nasty leak. Fit radio control for realistic operation on gradients.
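On the whistling: the standing-wave estimate mentioned earlier is easy to reproduce. This is a rough Python sketch of the quarter-wave (half-open tube) calculation for a 9.5 inch flue; the two temperatures are illustrative choices of mine, roughly ambient air and hot flue gas, not measured values:

```python
import math

def quarter_wave_freq(length_m, temp_k):
    """Fundamental frequency of a tube open at one end, closed at the other."""
    gamma, r_air = 1.4, 287.0              # ratio of specific heats, gas constant for air
    c = math.sqrt(gamma * r_air * temp_k)  # speed of sound in air, m/s
    return c / (4.0 * length_m)            # quarter-wave resonance

flue = 9.5 * 0.0254  # 9.5 inches in metres
for t in (293.0, 700.0):  # roughly 20 C ambient, and hot flue gas
    print(f"{t:.0f} K: {quarter_wave_freq(flue, t):.0f} Hz")
```

The result brackets the 350-550 Hz range quoted above (F above middle C is about 349 Hz; the C an octave above middle C is about 523 Hz), which is consistent with a flue resonance being the source of the whistle.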
2019-04-21T19:01:28Z
http://ngdr.net/Manifold/GRS-Pannier-Tank/index.html
The Citadel vs. Wofford, to be played at historic Johnson Hagood Stadium, with kickoff at 2:00 pm ET on Saturday, October 10. The game will not be televised. Hey, a quick hoops update: learn to embrace the pace! Oh, and a little baseball news: the 2016 schedule is out, and the attractive home slate includes two games against Clemson — which will be the first time the Tigers have played The Citadel in Charleston since 1990. This week has been dominated by the aftermath of the extreme flooding that has affected almost all of South Carolina. That is particularly the case in Columbia, where I live and where on Wednesday the University of South Carolina was put in the position of having to move a home football game out of the city. The Citadel was more fortunate, as its home football game on Saturday will go on as scheduled. This is a big week at the military college, as it is Parents’ Weekend, when seniors get their rings and freshmen become official members of the corps of cadets. I was a little undecided as to what I would write about for this preview. The Citadel is coming off of a bye week, and there really isn’t much in the way of major news, at least of the non-weather variety. Later in this post I’ll have a small statistical breakdown of the Terriers, but I’m going to take the opportunity to make this a “theme” post. That theme? Mother Nature. Charlie Taaffe’s first game as The Citadel’s head football coach was scheduled to take place on Saturday, September 5, 1987. The opponent was Wofford; the venue, Johnson Hagood Stadium. Well, Taaffe did eventually coach that game, but it took place one day later, on September 6, the Sunday before Labor Day. The delay was necessitated by a week of rain (sound familiar?) that left the field (and just about everything else in the area) a soggy mess. The contest was rescheduled for 3:00 pm on Sunday. The corps of cadets marched to the game wearing duty uniforms, which no one in attendance could ever recall happening before. 
There was still rain in the vicinity at kickoff, but a decent crowd (given the circumstances) of 11,470 was on hand for the game anyway. By the time the second half began, the sun had made an appearance. Charlie Taaffe’s wishbone attack had made its appearance much earlier. Fourteen different Bulldogs ran with the football that day, led by Tom Frooman. Frooman had 101 yards rushing (on only nine carries), then a career high, and scored on the second play from scrimmage, taking the ball from Tommy Burriss on a misdirection play and rumbling 67 yards for a TD. The Citadel won the game 38-0; others in the statistical record included Anthony Jenkins (who intercepted a pass and returned it 33 yards, setting up a touchdown) and Gene Brown (who scored the final TD of the game on a 16-yard keeper). The Citadel’s offense ran 84 plays from scrimmage (compared to the Terriers’ 42) and rushed for 384 yards, controlling the clock to an enormous degree (44:16 time of possession). Two years later, bad weather would again cause a change of plans for a home football game at The Citadel. This time, the game was played on the day it was scheduled, but not at Johnson Hagood Stadium. It was a very different (and more dire) situation, but one that featured the same player in a starring role. Hurricane Hugo’s impact on Charleston and the rest of the Lowcountry is never too far from the minds of those who remember it. Among the footnotes to that time is the 1989 “Hugo Bowl”, a game between The Citadel and South Carolina State that was supposed to have been played in the Holy City, but was eventually contested at Williams-Brice Stadium in Columbia. There would have been a certain kind of hype attached to the game, which explains why a reporter for The Nation was one of the 21,853 people in attendance. However, any sociopolitical context had already been effectively blown away by the winds that had done so much damage to the state the week before. 
The Citadel had won its previous game at Navy, 14-10, but that victory had come at a cost. The starting quarterback for the Bulldogs, Brendon Potts, was lost for the season with a knee injury. His replacement was a redshirt freshman named Jack Douglas. Douglas made his first career start for The Citadel against South Carolina State. He scored two touchdowns while passing for another (a 68-yard toss to Phillip Florence, one of two passes Douglas completed that afternoon). Shannon Walker had a big game for the Bulldogs, returning a kickoff 64 yards to set up a field goal, and later intercepting a pass that, after a penalty, gave The Citadel possession at South Carolina State’s 6-yard line (Douglas scored his first TD two plays later). Adrian Johnson scored the go-ahead touchdown in the third quarter on a 26-yard run. The Citadel had trailed South Carolina State at halftime, but held the Orangeburg Bulldogs scoreless in the second half. The military college won the game, 31-20, and finished with 260 rushing yards — 137 of which were credited to one Tom Frooman (on 15 carries). The native of Cincinnati rushed for 118 yards in the second half, with a key 41-yard run that came on the play immediately preceding Johnson’s TD. Frooman added 64 yards on an 80-yard drive that cemented the victory (Douglas capping that possession with a 3-yard touchdown in the game’s final minute of play). Later in that season, the Bulldogs would return to Johnson Hagood Stadium on November 4, their first game in Charleston after the hurricane. The game was attended by a crowd of 15,214. The Citadel defeated Terry Bowden’s Samford squad, 35-16. That contest featured one completed pass by The Citadel (thrown by Speizio Stowers, a 16-yarder to Cornell Caldwell) and 402 rushing yards by the home team. Frooman led the way again with 113 yards and 3 touchdowns, while Douglas added 105 yards and a score. 
Raymond Mazyck picked up 92 yards and a TD, and Kingstree legend Alfred Williams chipped in with 55 yards on the ground. Tom Frooman had a fine career at The Citadel. He was an Academic All-American, and is still 13th on the school’s all-time rushing list. It is interesting that some of his best performances came in weather-altered games. Perhaps that says something about his ability to adapt. Or it could just be a fluke. Either way, the yards still count. Wofford is 3-2, 1-0 in the SoCon. The Terriers are 3-0 against FCS teams (Tennessee Tech, Gardner-Webb, Mercer) and 0-2 versus FBS squads (losing big at Clemson and close at Idaho). I’m inclined to ignore the game against Clemson (currently a Top-10 FBS team), and am not quite sure what to make of the Idaho contest (a long-distance road game played in a small dome). I’m just going to focus on the other three matchups. Wofford defeated Tennessee Tech 34-14 in Spartanburg on September 12, a week after playing Clemson. In a way, the game was closer than the score indicates; in another, it was not. Tennessee Tech scored a touchdown on its opening possession of the game, and had other chances to put points on the board. However, twice the Golden Eagles turned the ball over in the red zone. In the second quarter, Tennessee Tech advanced to the Wofford 20-yard line before Terriers safety Nick Ward intercepted a pass to thwart the drive. The opening drive of the third quarter saw the Golden Eagles march 69 yards down the field, only to fumble the ball away at the Wofford 4-yard line. A third trip to the red zone at the end of the game ended on downs. Despite those costly mistakes, Tennessee Tech actually won the turnover battle, as Wofford lost the ball three times on fumbles. Given all that, were the Golden Eagles unlucky to lose the contest? Well, no. Wofford dominated major portions of the game, controlling the ball (and the clock) with long, sustained drives. 
The Terriers scored four touchdowns and added two field goals, with each scoring possession at least nine plays in duration (Wofford’s second TD was the result of a 15-play, 73-yard drive). A seventh long drive (10 plays) ended in one of the lost fumbles. The Terriers averaged 6.9 yards per play, including 6.2 yards per rush and 12.9 yards per pass attempt (two quarterbacks combined to go 7 for 9 through the air, including a 25-yard TD). Wofford’s time of possession was a commanding 37:05, which is what happens when an offense has a successful ground game and converts 9 of 12 third-down opportunities; the Terriers ran 81 plays from scrimmage. Wofford finished with 562 total yards, more than twice the output of Tennessee Tech (which had 274). Winning this game by 20 points was a solid result for Wofford. Tennessee Tech had lost badly to Houston prior to facing the Terriers (no shame in that). Following their game in Spartanburg, however, the Golden Eagles defeated Mercer and Murray State (the latter a road game) before losing last week to UT Martin. On September 26, the Terriers shut out Gardner-Webb 16-0. That home game came one week after a 41-38 loss to Idaho in the Kibbie Dome. The contest was affected by a near-constant rain that put a damper on both offenses. Wofford won despite producing only 224 yards of total offense (including 159 yards rushing, averaging only 3.0 yards per carry). On defense, however, Wofford had six tackles for loss and limited the Runnin’ Bulldogs to 149 yards of total offense (and no points, obviously). Gardner-Webb averaged only 2.6 yards per play, never advancing past the Terriers’ 40-yard line. Wofford did manage another long scoring drive in the game, a 16-play, 96-yard effort that led to the game’s only touchdown. Placekicker David Marvin added three field goals, including a 50-yarder. Gardner-Webb is 1-3 on the season, with the lone victory coming in a squeaker against Virginia Union. 
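Two of the quoted rates can be sanity-checked with quick arithmetic: total yards divided by plays from scrimmage gives the per-play average, and time of possession over the 60-minute game clock gives the possession share. A minimal sketch (the helper names are mine, not from any box-score tool):

```python
def yards_per_play(yards: int, plays: int) -> float:
    """Per-play average, rounded the way box scores report it."""
    return round(yards / plays, 1)

def possession_share(possession: str, game_minutes: int = 60) -> float:
    """Percent of the game clock held, given time of possession as MM:SS."""
    m, s = map(int, possession.split(":"))
    return round((m + s / 60) / game_minutes * 100, 1)

# Figures quoted above for Wofford vs. Tennessee Tech:
print(yards_per_play(562, 81))    # 6.9, matching the quoted average
print(possession_share("37:05"))  # 61.8 (percent of the game clock)
```

In other words, Wofford held the ball for more than three-fifths of the game.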
The Runnin’ Bulldogs lost to South Alabama by only 10 points in their season opener, but then dropped an overtime decision at home to Elon. Last week, Wofford escaped middle Georgia with a 34-33 win over Mercer, prevailing in overtime after the Bears missed a PAT in the extra session. Mercer scored 10 points in the final three and a half minutes of regulation, but was unable to score a potential game-winning TD after having first-and-goal on the Wofford 4-yard line in the closing seconds. The Terriers got back to their running ways in this one, rushing for 391 yards on 52 attempts (7.5 yards per carry). The possessions weren’t as long in terms of total snaps (only one lasted more than eight plays), but they were efficient enough (five scoring drives of 64+ yards). Wofford had three runs of more than 50 yards in the contest. The passing game wasn’t in much evidence, as the Terriers only attempted six passes (completing four for a total of 43 yards). While Mercer’s missed PAT proved costly for the Bears, the game only went to overtime in the first place because Wofford had its own issues in the kicking game, as two of its field goals and an extra point were tipped or blocked (two by the same player, Mercer linebacker Kyle Trammell). Wofford also fumbled four times, losing two of them. When the dust had settled in Macon, the Terriers had won despite being outgained in total yardage (464-434) and being on the short end in terms of plays (89-58) and time of possession (a six-minute edge for the Bears). Mercer is now 2-2 on the campaign, having lost to Tennessee Tech (as mentioned earlier) and Wofford, with victories over Austin Peay and Stetson. Wofford passes the ball 15.3% of the time, with 21.1% of its total yardage coming through the air. The Terriers’ depth chart lists four quarterbacks, all separated by the “OR” designation, as in “one of these guys will start, you have to guess which one”.
So far this season, three different signal-callers have started for the Terriers. Evan Jacks, who started last year’s game against The Citadel and rushed for 141 yards and two TDs, has thrown 30 of Wofford’s 48 passes this season, and is also second on the team in rushing attempts. He is averaging 5.7 yards per carry. Brad Butler and Brandon Goodson have also made starts at QB for the Terriers and could see action on Saturday. At least one of them is likely to do so (and the fourth quarterback, senior Michael Weimer, could also make an appearance). Wofford fullback Lorenzo Long rushed for 194 yards against Mercer, including a 60-yard TD run. Long rushed for 930 yards and 15 TDs last season. Halfbacks Nick Colvin and Ray Smith both possess impressive yards-per-carry statistics. Colvin is also tied for the squad lead in receptions, with five. You may recall that Smith had a 92-yard touchdown run versus Georgia Tech last year, the longest run by an opponent against the Yellow Jackets in that program’s entire long and distinguished history (and as I said last year, that is just amazing). Sophomore backup running back Hunter Windham has the Terriers’ lone TD reception. Wideout R.J. Taylor has five catches. Will Gay, who started at halfback for two of Wofford’s first three games, is out for the season with a knee injury. Gay was also a return specialist for the Terriers. On the offensive line, Wofford’s projected starters average 6’3″, 292 lbs. Right tackle Anton Wahrby was a first-team preseason All-SoCon selection; the native of Sweden was a foreign exchange student at Lexington High School (just your everyday 300-lb. foreign exchange student). He is majoring in French. Right guard T.J. Chamberlin, a preseason second-team all-conference pick, made his season debut against Mercer. Chamberlin missed the first four games of the Terriers’ campaign recovering from a knee injury. On defense, Wofford runs what it calls the “Multiple 50”. 
Usually, this involves three down linemen and four linebackers. The Terriers have had their share of injuries this season, though there is a sense that Mike Ayers and his staff can “plug and play” for most of those players missing time. One possible exception to that is nose tackle E.J. Speller, who was injured in the opener at Clemson. His gridiron career is now over after shoulder surgery. Replacing him in the lineup is Miles Brown, a 6’1″, 310-lb. freshman from Cheverly, Maryland, who attended Sidwell Friends School in Washington, DC. Perhaps he is pals with President Obama’s two daughters, who are also students at Sidwell Friends. Wofford suffered a blow when linebacker Terrance Morris, a second-team preseason all-league pick, hurt his knee prior to the start of the season. He is out for the year. Drake Michaelson, also a preseason second-team all-SoCon choice, is the league’s reigning defensive player of the week after making 11 tackles and returning a fumble 31 yards against Mercer. Michaelson and fellow inside linebacker John Patterson share the team lead in tackles, with 38. Jaleel Green had eight tackles against The Citadel last season from his strong safety position, including two for loss. Chris Armfield, one of the starting cornerbacks, was a second-team all-league preseason pick in 2014. Armfield has started all five games for the Terriers; indeed, every projected starter for Wofford on defense has started at least four times so far this year. As mentioned above, Wofford has had some issues with placekicking, but that has more to do with protection than the specialists. Placekicker David Marvin is 7 for 10 on the season in field goal tries, with a long of 50 yards against Gardner-Webb. He is 15 for 16 on PAT attempts. Wofford punter Brian Sanders was the preseason all-league selection at his position.
He is currently averaging less than 35 yards per punt; however, his placement statistics are good, with 7 of his 22 punts being downed inside the 20-yard line. Sanders also serves as the holder on placekicks. Long snapper Ross Hammond is a true freshman. His father, Mark Hammond, is the South Carolina Secretary of State. Ross Hammond’s maternal grandfather played in the CFL and AFL. Chris Armfield and Nick Colvin are Wofford’s kick returners. Colvin returned a kickoff 100 yards for a touchdown against Idaho. Paul Nelson is the team’s punt returner; he had a 24-yard return and a 17-yard return versus Gardner-Webb. – Wofford has 38 residents of South Carolina on its roster, the most from any state. Other states represented: Georgia (21), Florida (16), Tennessee (12), Ohio (8), North Carolina (7), Kentucky (4), Virginia (2), Wisconsin (2), Minnesota (2), and one player each from Alabama, Maryland, Arizona, and Oklahoma. As previously noted, offensive lineman Anton Wahrby is a native of Sweden. – Per one source that deals in such matters, Wofford-The Citadel is a pick’em. The over/under is 48. – Apparently it is going to be impossible for The Citadel to play a home game at Johnson Hagood Stadium this season under pleasant weather conditions. The forecast on Saturday from the National Weather Service, as of this writing: showers and thunderstorms likely, with a 60% chance of precipitation. – There will be a halftime performance by the Summerall Guards. – The Citadel is reportedly wearing its “blazer” football uniform combination for this contest. It’s an apparent effort to make sure cadet parents attending their first football game at The Citadel will have no idea what the school’s official athletic colors actually are. I’ll be honest here. I have no idea how Saturday’s game will play out on the field.
There are a lot of factors involved that only serve to confuse the situation, including potential weather concerns, personnel issues, how The Citadel will perform after a bye week, Wofford’s occasionally inconsistent play (mentioned by Mike Ayers on the SoCon teleconference)…there is a lot going on, and that’s even before you get to Parents’ Day and the hoopla associated with it. The players and coaches can’t worry about the way the game is called. They have enough to worry about. However, there is no question that plenty of people who follow The Citadel have little to no confidence when it comes to getting a fair shake from SoCon officials, particularly after last year’s officiating debacle in this matchup. I can’t say that I blame them. SoCon commissioner John Iamarino may not appreciate those negative opinions about his on-field officials, but Bulldog fans have long memories. I hope The Citadel wins. I also hope there isn’t another egregious officiating mishap that affects the outcome of the game. I’m sure everyone feels the same way. Stay dry, and fill up the stadium on Saturday. The Citadel vs. Samford, to be played at historic Johnson Hagood Stadium, with kickoff at 1:00 pm ET on Saturday, November 15. The game will not be televised. In this post, “Bulldogs” refers to The Citadel, while “Birmingham Bulldogs”, “SU”, or “Baptist Tigers” will serve as references to Samford. It is possible that this is Pat Sullivan’s last year coaching at Samford. Sullivan has had serious health problems in recent years, and missed the first three games of last season while recuperating from back surgery. This year, Sullivan missed the season opener at TCU (where he was once the head coach) as he recovered from cervical fusion surgery; he coached Samford’s league opener against VMI from the press box. Most of the coach’s health issues can be traced to chemotherapy and radiation treatments he received after being diagnosed with throat cancer in 2003.
Defensive coordinator Bill D’Ottavio was the acting head coach against TCU and has handled a considerable amount of media obligations on Sullivan’s behalf throughout much of the season. If this is Sullivan’s final season (and I have no idea if it is), he’s having another solid campaign. The Birmingham Bulldogs are 6-3 and have clinched the program’s fourth consecutive winning season. Samford won’t win a piece of the SoCon title this year as it did in 2013, but it could finish as high as second. Last week’s victory over Western Carolina was Sullivan’s 46th win in his eight years at Samford. That made him the school’s all-time winningest football coach. The school’s field house will be named in his honor. Samford has not played a comparable (“like”) non-conference opponent. Besides TCU, the Birmingham Bulldogs have played two small Alabama schools, Stillman (which competes in Division II) and Concordia (which isn’t an NCAA or NAIA member; it plays in the USCAA). After it plays The Citadel, Samford will finish its season by playing at Auburn, Pat Sullivan’s alma mater (where as a quarterback he led the team to three bowl games and won the Heisman Trophy in 1971). Samford beat the two Alabama colleges by a combined score of 107-0; it lost to a powerful TCU squad 48-14. As far as evaluating SU is concerned, then, it’s best to simply focus on its games in SoCon play. The first conference opponent Samford faced was VMI, in the third game of the season. SU destroyed the Keydets 63-21. SU led 49-0 at halftime, delighting the partisan home crowd, and rolled up 525 yards of total offense (including 180 yards rushing for Denzel Williams). Samford’s next game was a 38-24 loss at Chattanooga. Starting quarterback Michael Eubank threw for 244 yards and two touchdowns, but was also intercepted three times and sacked four times. One week after averaging 7.5 yards per play, SU was held to 4.7 yards per play by the Mocs (despite 129 yards receiving for Karel Hamilton).
Samford also allowed a punt return TD in the contest. The Birmingham Bulldogs rebounded with a 21-18 home victory over Mercer. Samford led the entire game, and was up by 11 points with less than a minute to play when Mercer’s Chandler Curtis returned a punt 99 yards for a touchdown. SU recovered the ensuing onside kick to preserve the victory. Hamilton had 10 catches and 101 receiving yards, while Jaquiski Tartt had eight tackles and also intercepted a pass. After a bye week, Samford lost at home to Wofford 24-20. The Terriers took the lead with less than five minutes left in the game, then stopped SU on 4th-and-1 from the Wofford 24, gaining possession and the win. Samford’s D held Wofford to 3.8 yards per play (and under 200 total rushing yards, though Terriers fullback Lorenzo Long did rush for 128 yards on 20 carries). Michael Eubank passed for 305 yards and a TD (he also threw a pick). Samford was 3-13 on third down conversions and only rushed for 49 yards, which contributed to Wofford’s edge in time of possession (over seven minutes). The following week, SU destroyed Furman in Greenville 45-0. Samford led 14-0 after less than three minutes, having run only one offensive play. A blocked punt for a TD opened the scoring for the Birmingham Bulldogs, and they never looked back. Denzel Williams rushed for 101 yards and two touchdowns, while Eubank had another 300-yard passing day. Karel Hamilton had 206 yards receiving (on nine catches). In beating Western Carolina 34-20 last week, Williams rushed for 156 yards and two more TDs, while Hamilton had another 100-yard receiving day. Justin Cooper had 14 tackles to lead a defense that intercepted two WCU passes. The next three sections include statistical team/conference comparisons for SoCon games only (unless otherwise indicated). Samford has played six league games, facing every conference team except The Citadel. The Bulldogs have played all but two SoCon teams, Samford and VMI.
In those six conference matchups, Samford’s offense has thrown the ball (or been sacked attempting to pass) 46.3% of the time. Passing yardage accounts for 57.8% of SU’s total offense. Samford is second in scoring offense (34.5 ppg) and total offense, and also second in the league in yards per play (5.9). The Citadel is next-to-last in total defense and is allowing 7.2 yards per play, but is actually fifth in scoring defense (28.2 ppg). SU leads the league in passing offense, averaging 252.7 yards per game in conference action. Samford is third in the SoCon in passing efficiency, with nine touchdowns and four interceptions. SU quarterbacks have been sacked twelve times, tied with Mercer for the most allowed in league play. The Birmingham Bulldogs have averaged 32.2 pass attempts per game, which is more than every league team except Furman and VMI. Samford is averaging 7.9 yards per pass attempt, which is fourth in the SoCon. The Citadel is sixth in pass defense, but dead last in defensive pass efficiency, allowing 9.5 yards per pass attempt. In five league games, the Cadets only have five sacks and three interceptions. The Birmingham Bulldogs are fourth in rushing offense (4.4 yards per carry), averaging 185 yards per game. Samford’s 17 rushing touchdowns are second in the conference, behind Chattanooga. The Citadel is next-to-last in rushing defense, and is allowing a league-worst 6.2 yards per rush. Samford is fourth in offensive third down conversion rate (42.5%). The Citadel is fifth in defensive third down conversion rate (44.8%). SU has a red zone TD rate of 60%, second-worst in the league (but well ahead of Furman’s abysmal 28.6%). The Citadel’s red zone D has been solid, with a TD rate of 47.3%, second-best in the league (behind only Western Carolina). Samford is third in scoring defense, allowing 20.2 points per game. SU is also third in total defense (4.5 yards allowed per play) and rushing defense (3.9). 
The Citadel is third in total offense (averaging 5.5 yards per play) and leads the league in rushing offense (a category in which the Bulldogs rank second nationally, trailing only Cal Poly). The Bulldogs are next-to-last in passing (averaging only 6.4 yards per attempt), but are actually fifth in passing efficiency. Samford leads the league in passing defense, allowing 141 yards per game (which is third nationally). SU is also first in the SoCon in pass efficiency defense, and leads the conference in interceptions (9). At 49.4%, The Citadel is second in the SoCon in offensive third down conversion rate, behind only UTC. Samford is second in defensive third down conversion rate (32.3%), so this will definitely be something to watch on Saturday. The Citadel has an offensive red zone TD rate of 66.7%, tied for third-best in the league. Samford’s red zone defensive TD rate is 76.5%, sixth-best in the conference. Samford is +2 in turnover margin in league action, while The Citadel is +1. As far as time of possession is concerned, The Citadel has held the ball for an average of 31:25, second-highest in the conference. Samford is next-to-last in that category (28:39). That hasn’t prevented the Birmingham Bulldogs from leading the league in offensive plays. Samford’s hurry-up style has led to it averaging 2.58 plays per minute in SoCon games when on offense. Conversely, The Citadel runs 2.33 plays per minute when it is on offense. Interestingly, the two teams have run almost exactly the same number of offensive plays per game (73.8 for Samford, 73.4 for The Citadel). The Citadel is tied for the second-fewest penalties per game in SoCon play, while Samford has the second-most. On the other side of the coin, SU opponents commit more penalties per game than all but one team in the league (VMI). As its fans know all too well, The Citadel does not get the benefit of having a lot of flags thrown on opposing teams in SoCon contests; only Wofford has seen fewer in this category.
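Those pace figures follow directly from plays per game and time of possession. Here is a quick sketch of the arithmetic (the helper is my own, not a claim about how the SoCon computes it); because the published per-game inputs are themselves rounded, the results land near, not exactly on, 2.58 and 2.33:

```python
def plays_per_minute(plays_per_game: float, possession: str) -> float:
    """Offensive pace: plays per game divided by minutes of possession (MM:SS)."""
    minutes, seconds = map(int, possession.split(":"))
    return plays_per_game / (minutes + seconds / 60)

# Figures quoted above: Samford 73.8 plays with 28:39 of possession,
# The Citadel 73.4 plays with 31:25.
samford_pace = plays_per_minute(73.8, "28:39")   # about 2.58
citadel_pace = plays_per_minute(73.4, "31:25")   # about 2.33-2.34
```

The interesting consequence, as noted above, is that two very different styles produce nearly identical play counts per game.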
Samford quarterback Michael Eubank (6’6″, 246 lbs.) is a native of California who was the No. 8 high school dual-threat QB in the nation in 2011, per Rivals.com. He wound up attending Arizona State for three years, redshirting his freshman year and then playing in 20 games over the next two seasons, rushing for seven touchdowns and throwing for four more. In January of 2014, Eubank transferred to Samford. This season, he is completing 64.7% of his passes, averaging 7.7 yards per attempt, with ten touchdowns and six interceptions. Eubank also has five rushing touchdowns. Denzel Williams (5’10”, 191 lbs.) is the workhorse running back in Samford’s spread offense. The redshirt sophomore has 157 of the team’s 392 rushing attempts this season; Eubank is the only other player with more than 37. For the season, Williams is averaging 87.8 yards per game and 5.0 yards per carry; with 15 rushing touchdowns, he also leads the SoCon in scoring. Williams had 180 yards rushing against VMI, and also had 100-yard efforts against Furman and Western Carolina. Karel Hamilton (6’1″, 190 lbs.) is far and away the leader in receptions for Samford, with 45. The sophomore is averaging a sterling 16.4 yards per catch, with six TDs. As mentioned earlier, Hamilton had 206 yards receiving against Furman; he also had 115 yards receiving versus Western Carolina, 101 yards against Mercer, and 129 yards versus Chattanooga. Tight end Tony Philpot (6’2″, 243 lbs.) was a second-team all-league selection in the preseason. Average size of the starters on Samford’s offensive line: 6’4″, 299 lbs. Right tackle Gunnar Bromelow, a preseason first-team All-SoCon selection, is the biggest of the group; the redshirt junior checks in at 6’6″, 305 lbs. Right guard C.H. Scruggs was a second-team All-SoCon preseason choice. Four of the five o-line starters are in their fourth or fifth year in the program. In my opinion, free safety Jaquiski Tartt (6’1″, 218 lbs.)
is one of the two best defensive players in the league (along with Chattanooga’s Davis Tull). He had a pick-6 against The Citadel in 2012. Tartt is second on the team in tackles, with 57. Tartt was one of two Samford defensive backs to get a first-team preseason All-Conference nod. James Bradberry, a 6’1″, 205 lb. cornerback, was the other. Bradberry spent one year at Arkansas State before joining the Birmingham Bulldogs’ program. Strong safety Jamerson Blount (6’1″, 190 lbs.) leads the team in passes defensed and is also third in tackles. He is one of 22 players from Florida on the SU roster. Samford’s leading tackler is middle linebacker Justin Cooper, a 6’2″, 230 lb. redshirt junior who began his college career at Texas Tech. Cooper has 5.5 tackles for loss this season (69 total tackles) and is the reigning SoCon Defensive Player of the Week. Fellow linebacker Josh Killett (6’2″, 220 lbs.) has six tackles for loss as part of his 40 overall tackles. Along the defensive line, Samford is quite imposing. There are a lot of players in the rotation (including three noseguards on the two-deep), and plenty of individual size and skill. Michael Pierce, a 6’0″, 309 lb. defensive tackle who spent his first two years in college at Tulane before transferring to Samford last year, was a first-team All-SoCon preseason selection. He has 33 tackles this year, including five tackles for loss. Mike Houston called Pierce “one of the better d-linemen in the league” in his weekly press conference. Pierce’s younger brother Myles is a freshman linebacker at The Citadel who had a tackle last week against Furman. One of three players listed on the depth chart at the “stud” position, Roosevelt Donaldson (6’2″, 258 lbs.), leads the team in tackles for loss, with seven. He also has the most sacks (four). For Samford, both kicker Warren Handrahan and punter Greg Peranich were first-team preseason picks for the All-SoCon team.
Peranich is averaging 43.1 yards per punt, with 14 of his 41 kicks downed inside the 20 (against four touchbacks). However, two of his punts this season have been returned for TDs. Samford is in the bottom five nationally in average punt return allowed (17.77 yards). Handrahan is 5-9 on field goal attempts this season, with a long of 47. Last season he was 19-24 on field goal attempts, with a long of 48. That included two field goals against The Citadel (including a 44-yarder). He did not kick in Samford’s victory over Western Carolina last week. Backup placekicker Reece Everett was 2-2 on field goal tries in that game (and is 4-5 for the season). Everett is listed as this week’s starter on the two-deep. Samford’s kickoff specialist is Michael O’Neal. Almost 25% of O’Neal’s kickoffs have resulted in touchbacks; he has only kicked the ball out of bounds once this year. Nationally, SU is 43rd in kickoff return average (21.0 yards/return) and 61st in kickoff return defense (19.8 yards/return). Robert Clark, a 5’9″, 173 lb. wide receiver, is Samford’s primary kickoff and punt returner. His longest kick return this season was for 45 yards. From 2010-2012, The Citadel’s offense only scored a combined total of 34 points in three games against Samford’s “Bear” front. In those three games, the Bulldogs faced third down on 39 occasions, converting only six of them for first downs. Last season’s game was different. The Citadel was 8-17 on third down and scored four rushing touchdowns while rolling up a respectable 338 yards rushing. The Bulldogs overcame a 17-0 deficit to win 28-26, with Vinny Miller rushing for 95 yards. The Citadel only passed for 55 yards in that contest, however (on 16 attempts). If the Bulldogs hope to win on Saturday, they will likely have to throw for more yardage than that, and more effectively as well. Speaking of passing, let’s flash back to the post-hurricane game against Samford mentioned earlier: The Citadel only attempted two passes, completing one of them.
I’ll bet you thought Jack Douglas threw that completed pass, but nope: it was Speizio Stowers with a 16-yard pass to Cornell Caldwell. Douglas threw the other Bulldog pass in that game, which fell incomplete, but we’ll cut him some slack, since he rushed for 105 yards and a touchdown while directing an attack that finished with 402 yards rushing. Tom Frooman had 113 of those yards and three TDs, while Raymond Mazyck added 92 yards on the ground and a score. Also prominent in the statbook that day: Kingstree’s own Alfred Williams, with 55 yards rushing on 11 carries. Care to guess what the attendance was? Remember, Charleston was still in major recovery mode from the hurricane (you could say the same about Johnson Hagood Stadium). Okay, the answer: 15,214. Think about that, especially when compared to recent attendance at The Citadel (and elsewhere, for that matter). – Speaking of the game notes, I didn’t realize Jake Stenson became the first Bulldog since Andre Roberts in 2008 to score a rushing and receiving touchdown in the same game. Kudos to him. – The 22 positions on offense and defense for The Citadel have been started by a total of 32 players — 18 on offense, and only 14 on defense. Eleven Bulldogs have started every game, including seven on defense. – The Citadel Athletic Hall of Fame will enshrine six new members this week. Two baseball players, 1990 CWS hero Hank Kraft and Rodney Hancock (the scourge of Furman), will be inducted. All-American wrestler Dan Thompson will be enshrined, as will football lineman Mike Davitt, a mainstay during the Red Parker era. Charleston mayor Joe Riley and basketball player/cookbook author Pat Conroy will be recognized as “honorary” members. – The 1:00 pm ET start time will be the fourth different start time for a game at Johnson Hagood Stadium in 2014. Other start times: Noon, 2pm, and 6pm. – Only one player on Samford’s roster, reserve defensive lineman Cole Malphrus, is from South Carolina. The junior is from Hilton Head.
There are 28 natives of Alabama playing for SU, along with 22 each from Georgia and Florida. Tennessee is represented by seven players, while four hail from Mississippi, three from California, and two from North Carolina. There is even one Alaskan playing for the Baptist Tigers (freshman defensive back C.J. Toomer). – This week in the Capital One Mascot Challenge, Spike The Bulldog faces Aubie The Tiger, the mascot for Auburn. As for the game itself: this is a tough matchup for The Citadel. It’s an opponent with a defense that has a history of success against the triple option (last year notwithstanding) and an offense that would be expected to do well against the Bulldogs’ pass D. The key to the game for The Citadel is to keep Samford’s offense off the field as much as possible. The SU defense has been good at stopping teams on third down this season; the Bulldogs have to reverse that trend on Saturday. Samford has had some results that might give The Citadel some confidence, including its games against Wofford and Mercer. On the other hand, the Birmingham Bulldogs drilled Furman (which took The Citadel to overtime just last week) and handled Western Carolina with relative ease. The Citadel can win this game, but it will probably take the Bulldogs’ best performance of the season. That includes a team effort from not only the offense and defense, but also the special teams, which were subpar against the Paladins (to say the least). I am a little worried about the atmosphere on Saturday. After the big Homecoming win over Furman, this game might be anticlimactic to some. It shouldn’t be that way for the team, however. There are still goals to pursue for these Bulldogs, including a third straight victory and a chance to finish the year with a winning season in conference play. I’m looking forward to this contest. It’s a home game, after all. There aren’t that many of them in a given season. You have to treasure them all, especially when there won’t be another one until next September.
Samford 19, The Citadel 14. — Fashion update for this week: The Citadel went with the navy jerseys/white pants look for Homecoming, which I guess is its postmodern traditional look. It was the first time the Bulldogs wore that combo this season; they also wore them once last season, in the game against Chattanooga. The Citadel lost both games. — The Citadel has now lost five consecutive “celebration weekend” games — in other words, Parents Day/Homecoming contests. It’s only the third time the Bulldogs have lost five straight PD/HC games, and the first time since the 1985-1987 seasons. I think that’s significant because those are generally the two most highly attended games of each season. Continuing to lose those contests isn’t going to engender a lot of enthusiasm among the alums and supporters at the games. Of course, attendance on Saturday dipped below 14,000, a very disappointing crowd for a Homecoming game on a nice Saturday afternoon. Then there is the “TV jinx”: The Citadel has now lost 16 of its last 17 televised games (counting ESPN.com), which is ridiculous. That total includes the last seven seasons. It could rise to 17 for 18 after this week’s game against South Carolina. — Samford ran 79 offensive plays from scrimmage, exactly what the Birmingham Bulldogs wanted to do, and those plays were not completely imbalanced in terms of run/pass. While The Citadel held the time of possession edge, Samford was able to sustain a number of drives, with five of them going for nine plays or longer. Dustin Taliaferro managed to throw 45 passes without being intercepted. Samford also rushed for 113 yards, lower than it would have liked but just enough for the victory. Of course, a lot of those yards came on the game-winning drive. — The Citadel lost two fumbles, which hurt (particularly the second one), but the loss can be attributed in large part to the two blocked field goal attempts. The Bulldogs have now had four placekicks blocked in the last two games.
From my vantage point, the problem on Saturday was a protection issue. However, I might be wrong about that. Kevin Higgins stated after the game, “We know our operation time is slow from the center back to the holder,” but this photo does make one wonder. It goes without saying that it is unacceptable to have four kicks blocked over a seven-kick span. It appears that Georgia Southern exploited a flaw, and that this was not adequately addressed in the week leading up to the Samford game. The Citadel has now lost three league games this season because of placekicking unit issues. I’ve said this before (actually, last week), but the Bulldogs do not have enough margin for error to survive continued woes in this area. The SoCon is an unforgiving league; if a team has a weakness, it will pay for that weakness more often than not. — The playcalling at the end of the drive that resulted in the second blocked field goal was…frustrating. I realize that a lot of this is predicated on QB reads, but the sequence on first-and-ten at the Samford 11-yard line went like this: Darien Robinson up the middle for two yards, Darien Robinson up the middle for a one-yard loss, Darien Robinson up the middle for no gain. Oof. I’m not calling the plays, and everyone should be thankful that I’m not, but a little something different was in order there. Toss sweep, anyone? — I am on record as saying that alums have at times been a little hard on the corps of cadets, but I was very disappointed in the corps’ performance on Saturday. The upperclassmen did not even bother to stand for the opening kickoff. I’m sorry to be an old fogey, but that’s simply not going to cut it. If the cadets are so tired that they lack the energy to cheer on their team for three hours, then I think they are clearly too exhausted to go out on the town after the game. My recommendation to Gen. Rosa and Col.
Mercado would be to let the clearly fatigued young men and women of the corps stagger back to campus immediately after the game is over and head straight to bed. There is no need to worry about overnights/extra hours of leave, as an 8 pm lights-out would be much more appropriate.

Davidson entered the game with an RPI of 49. The Wildcats have dropped out of the top 50 of the RPI following the loss to The Citadel (as of Thursday the Wildcats are at 56), but will almost certainly finish the season in the top 100. To be honest, I am not completely sure when the Bulldogs last recorded a victory over a “Top 100 RPI” team. I believe that it has not happened since 1989, when The Citadel beat South Carolina. Incidentally, The Citadel’s RPI has jumped up to 148 (I’m using ESPN’s RPI numbers). The Bulldogs are one spot ahead of none other than VMI.

Of course, Davidson didn’t have Stephen Curry last night, and that certainly made a difference. Whether it made enough of a difference to have changed the outcome of the game is debatable. In the first game between the two teams, Curry put up 32 points (with only 16 FG attempts) and added five assists — one assist more than Davidson had as a team last night. Even if you didn’t count Curry’s shooting numbers, though, Davidson still had a good FG% as a team in the game at McAlister Field House (although obviously with teams having to concentrate on Curry, his teammates have better opportunities).

The Citadel and Davidson are 1-2 in the league in FG% defense (the Wildcats lead that category) and in 3FG% defense (with the Bulldogs ranked first). Given that, it’s not surprising that the game featured poor shooting by both teams, and without its star, Davidson never got into a shooting rhythm. The Wildcats could not even make free throws (9-17 for a team that averages 71% from the line). What should concern Davidson more than the bad shooting, though, was the fact that the Wildcats were not able to contain the Bulldogs on the boards.
The Citadel had a season-high 48 rebounds last night to Davidson’s 31 (after Davidson won the rebounding battle 35-25 in the first matchup). Demetrius Nelson had a big night scoring inside, but he had scored 18 points in the first game, so that wasn’t a major surprise. The difference was that he also added 14 rebounds (after only having 4 against Davidson at McAlister) to the Bulldogs’ cause. Davidson did have 13 offensive rebounds, but when you miss 73% of your shots from the field, you’re going to get more opportunities for boards on the offensive end of the floor.

John Brown had 12 rebounds in 22 minutes of action. That’s the fifth time this season he’s had 12 boards in a game (he’s now hit that mark three times in a row). Brown has played more than 20 minutes in ten games this season. He has had double digit rebound totals in seven of them (and nine boards in one of the others). That’s not even counting his 12-boards-in-15-minutes performance against Samford. Brown is averaging 13.47 rebounds per 40 minutes of play (14.75 per 40 over his last four games). When he stays out of early foul trouble, he is a force.

Davidson leads the league in turnovers forced, and The Citadel committed a few too many last night (13). The Bulldogs had 19 turnovers in the first matchup, so they improved a little, but again Curry’s absence has to be considered (he had five steals in the January game). On the flip side, despite missing its point guard, Davidson only committed seven turnovers. Nelson missed five free throws, the only blip in an outstanding effort. Cameron Wells was 8-8 from the charity stripe, though, which alleviated an off-shooting night for him from the field.

Everyone who has been following the Bulldogs is excited right now, and deservedly so, but I want to sound a note of caution. I mentioned earlier in this post that the last time The Citadel won a road game against a top-100 opponent was against South Carolina in 1989.
That year had some parallels to this season. In 1989, The Citadel was trying to rebound from an 8-20 campaign. The team started the year slowly, but gradually improved. The win over the Gamecocks was the exclamation point on a run during which the Bulldogs won six out of seven games, including a beatdown of longtime hoops bully Marshall (the final game ever played at Deas Hall, the most fantastic Division I basketball arena in human history). Earlier in the year The Citadel had also beaten the College of Charleston on the road, which would be the last win at the CofC for the Bulldogs until this season.

With two games remaining in the regular season, the Bulldogs were in a position to claim second place in the SoCon regular season, with an outside shot at first. The Citadel wouldn’t win another game. The Bulldogs lost a tight game on the road to Western Carolina, then lost at UT-Chattanooga, and then lost in the first round of the Southern Conference tournament to East Tennessee State (which would then proceed to win the tourney).

I’m not saying we’re in for a repeat of 1989. For one thing, this year’s team is simply better. You can ask Ed Conroy — after all, he played on the 1989 team. It’s just that there is still work to be done this season, and to consolidate all the gains made on the court this year, the team needs to finish strong. Also, while I don’t want to be perceived as being overly pessimistic, I think it’s important to acknowledge that the margin of error for the program is still small. It’s not as small as it has been, though, and that’s a credit to Conroy and the players.

The Southern Conference tournament is going to be tough for everybody. If you’re The Citadel, you have to worry about Davidson (with Curry), UT-Chattanooga (a good team, and the tourney host), the College of Charleston (can the Bulldogs really beat that team three times in a row?), and a bunch of other squads that could pose matchup problems.
Drawing Elon or Appalachian State in the tourney would not be fun. That’s why getting the bye is so important.

Speaking of that, the “magic number” for The Citadel to clinch a bye in the tournament is now 2. For those unfamiliar with the “magic number” concept (it’s a baseball expression), what that means is any combination of two Bulldog victories or College of Charleston losses will guarantee a bye for The Citadel. Two Bulldog wins would do it, as would two CofC losses. One Bulldog win and one Cougar loss would be enough. The CofC has four games remaining, and The Citadel has three.

The Citadel now has eight days before its next game. I don’t know if that’s a good thing or a bad thing. I’m inclined to think it’s a good thing, because the Bulldogs probably need a bit of a break. There is always the fear that the team will lose momentum, but I believe it helps that when they play again, it will be at home before what should be a very good crowd. I can’t wait.
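The "magic number" bookkeeping described above is simple countdown arithmetic; here is a minimal sketch of it (the function and its name are my own illustration, not anything from the post):

```python
def magic_number_countdown(magic, team_wins, rival_losses):
    """Remaining magic number after counting subsequent wins by the team
    and losses by the rival; 0 means the bye (or title) is clinched.
    This mirrors the baseball convention: each team win or rival loss
    reduces the number by one."""
    return max(magic - (team_wins + rival_losses), 0)

# With the number at 2: one Bulldog win plus one Cougar loss clinches the bye.
print(magic_number_countdown(2, 1, 1))  # 0
# A single Bulldog win alone leaves the number at 1.
print(magic_number_countdown(2, 1, 0))  # 1
```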
https://thesportsarsenal.com/tag/jack-douglas/
Comment: UO Secure and 'eduroam' were unavailable between approximately 9:45am and 10:07am today (3/14).
Comment: Some WiFi users are not able to connect to UO Secure and eduroam currently. Staff are working to restore service, which should occur within the next 5 minutes.
Comment: UO Secure and 'eduroam' have been restored after an outage between approximately 11:45am and 12:13pm Tuesday, March 12.
Comment: We have received reports that UO Secure is unavailable. Staff are working to restore service.
Comment: We received reports today (Friday, March 8) of some people having trouble logging in to the UO Secure wireless network. Staff believe they have now resolved that issue.
Comment: Parts of Kalapuya Illihi are without WiFi. Staff are working to restore service, which is dependent on restoring Ethernet in that building.
Comment: Power outages affecting parts of east campus are causing some network service outages and degradations. Network services are expected to be restored when power is restored to the affected locations. Please refer to EWEB's power outage map: http://www.eweb.org/outages-and-safety/power-outages/power-outage-map.
Comment: Wireless service in the EMU was unavailable from approximately 1:20pm to approximately 2:20pm. Service has been restored.
Comment: Wireless service in the EMU was unavailable from approximately 1:20pm to approximately 2:20pm. Service is being restored.
Comment: On Monday (12/17/18), some users on the east side campus (east of the EMU) reported problems connecting to UO Secure. (When users attempted to connect, they received the message, "The authentication server is unresponsive.") Staff are monitoring the service and doing troubleshooting when an affected customer is identified. If you are experiencing this error when attempting to connect to UO Secure, please contact the Technology Service Desk at 6-HELP (6-4357).
Comment: On Monday (12/17/18), some users on the east side campus (east of the EMU) reported problems connecting to UO Secure. (When users attempted to connect, they received the message, "The authentication server is unresponsive.") Today, staff are monitoring the service. If you are experiencing this error when attempting to connect to UO Secure, please contact the Technology Service Desk at 6-HELP (6-4357).
Comment: Users on the east side campus (east of the EMU) have been having problems connecting to UO Secure beginning on Monday (12/17). When users attempt to connect, they receive the message, "The authentication server is unresponsive." One work-around is to connect to "eduroam". Staff are working to restore service.
Comment: Users on the east side of the EMU have been having problems connecting to UO Secure. When users attempt to connect, they receive the message, "The authentication server is unresponsive." One work-around is to connect to "eduroam". Staff are working to restore service.
Comment: Some users at the Knight School of Law are not able to connect to UO Secure. Others are able to connect. Staff are investigating the problem to restore service fully.
Comment: UO Secure is currently unavailable at the Knight School of Law. Staff are working to restore service.
Comment: Wireless network services are currently unavailable at Spencer View Apartments. Staff are working on the issue.
Comment: An issue with the campus network caused brief, widespread service interruptions just after 3pm today (11/27/18). That issue has been resolved.
Comment: An issue with the campus network caused widespread service interruptions just after 3pm today (11/27/18). We believe that issue is resolved or resolving.
Comment: Campus wireless networks experienced a brief service degradation midday on Wednesday, Nov. 14. From about 12:05pm to 12:20pm, people may have had trouble connecting to UO Secure or eduroam.
People who were already connected during that time should not have experienced any issues.
Comment: Twice today (Tuesday, Nov. 13), UO Secure experienced unplanned service outages. First, it was unavailable from about 6:45am to 7:30am. Later, it was unavailable for about 5 minutes at 12:05pm. If you continue to have problems, please contact the Technology Service Desk through the UO Service Portal at https://service.uoregon.edu or by phone at 541-346-4357.
Comment: We received reports of UO Secure being unavailable starting around 12:05pm today (Tuesday, Nov. 13). Many people reported that it started working for them again around 12:10pm. If you're continuing to have problems, please contact the Technology Service Desk through the UO Service Portal at https://service.uoregon.edu or by phone at 541-346-4357. ********** Separately, UO Secure was unavailable from about 6:45am to 7:30am today.
Comment: UO Secure was unavailable from about 6:45am to 7:30am today (Tuesday, Nov. 13). Service has now been restored.
Comment: We have received a report that Lawrence Hall has lost wired networking (i.e. Ethernet). Staff are investigating.
Comment: If you had trouble connecting to UO Secure or other services earlier today, please try again now. Your access should now be restored. If you have any questions, please contact the Technology Service Desk through the UO Service Portal at https://service.uoregon.edu or by phone at 541-346-4357.
Comment: We're receiving reports from many users who are unable to connect to UO Secure. Other people are able to connect. Staff are working on the issue.
Comment: We have received reports that users who changed their passwords today (Oct. 25) are not able to connect to UO Secure. Staff are investigating.
Comment: Some users may not be able to connect to WiFi. (They are not receiving IP addresses.) Staff are working to restore service.
Comment: We believe wireless network service has been restored.
If you continue to have problems, please contact the Technology Service Desk at 541-346-4357 (M-F 8am-7pm) or use the "Report an outage" form in the UO Service Portal (https://service.uoregon.edu).
Comment: Wireless services on campus are experiencing a service degradation. Some people are unable to connect. Staff are working on the issue.
Comment: Wireless services on campus are experiencing a service degradation. Some people are unable to connect. Staff are working on the issue.
Comment: Earlier today, UO Guest was failing to send new users the expected email and text messages containing their passwords, meaning they were unable to connect to the UO Guest wireless network. It is now working again. If you had this problem earlier, please try creating a new account now.
Comment: We're receiving reports that the UO Guest wireless network is experiencing a service degradation. Specifically, UO visitors trying to create new UO Guest accounts are not receiving the expected email or text message containing their password, meaning they are unable to connect to the UO Guest wireless network. Staff are working on the issue.
Comment: We've received a report of the UO Guest wireless network for visitors not working at the White Stag Block in Portland. Staff are working on the issue.
Comment: We believe the UO Guest wireless network is working again. If you continue to have problems, please report them through the "Report an outage" link in the UO Service Portal (https://service.uoregon.edu).
Comment: The UO Guest wireless network for UO visitors is currently not allowing people to log in. Staff are working on the issue.
Comment: On Wednesday, Sept. 5, from 3am to 7am, campus wireless network services may experience intermittent brief outages due to maintenance work. Devices in residence halls that require manual network registration (such as many game consoles and smart TVs) may be unable to connect to the campus network for similar brief periods during the maintenance.
Comment: A campuswide outage of UO Secure was resolved around 4:40pm today (Saturday, Aug. 25). Services should all be functional again. If you continue to experience problems, please report them through the "Report an outage" page in the UO Service Portal: https://service.uoregon.edu.
Comment: The UO Secure wifi network is currently down throughout campus. Staff are working to restore service. We believe the uowireless and UO Guest networks are functional and can serve as a temporary workaround for UO students and employees.
Comment: Some wireless network services on campus are down. Staff are working on the issue.
Comment: Some wireless network services on campus are unavailable. Staff are working on the issue.
Comment: WiFi is down in Oregon Hall. Staff are working on the issue.
Comment: On Tuesday morning, Aug. 21, we received some reports of UO Secure instability in parts of Deschutes Hall. If you're in Deschutes and are experiencing issues with UO Secure, please report those issues through the "Report an outage" page in the UO Service Portal at https://service.uoregon.edu or call the Technology Service Desk at 541-346-4357.
Comment: We've received reports of intermittent difficulties with UO Secure in Deschutes Hall. Staff are working on the issue.
Comment: We believe UO Secure is available once again after a brief period of not allowing connections. If you continue to have problems connecting to UO Secure, please report them through the "Report an outage" form in the UO Service Portal (https://service.uoregon.edu) or by calling the Technology Service Desk at 541-346-4357.
Comment: We're receiving reports of people being unable to connect to the UO Secure wireless network. Staff are investigating.
Comment: Lokey Education was not affected by WiFi outages as was earlier reported.
Comment: We have received a report that Lokey Education has lost WiFi service. Staff are investigating.
Comment: The UO Secure, UO Guest, and eduroam networks were unavailable in at least some parts of campus this morning. We believe service has now been fully restored.
Comment: We're receiving reports of wireless being down in some parts of campus. The UO Secure, UO Guest, and eduroam networks are affected. Staff are working on the issue.
Comment: UO Secure is unavailable in some parts of campus. Staff are working on the issue.
Comment: A power outage in the Earl complex this morning caused wired and wireless network services to be unavailable in Sheldon Hall for about 25 minutes. Services have now been restored.
Comment: UO Secure was unavailable in some campus buildings from 6:49am to 7:55am today (Friday, July 20). Staff are investigating the cause of this outage.
Comment: UO Secure was unavailable in some buildings on campus this morning. We believe this issue has now been resolved. If you continue to have problems, please report them through the UO Service Portal: https://service.uoregon.edu/TDClient/Requests/ServiceDet?ID=19096.
Comment: We're receiving reports from several buildings on campus that UO Secure is unavailable. Staff are working on the issue.
Comment: We have reports from several buildings on campus about UO Secure being unavailable this morning. Staff are working on the issue.
Comment: We have reports from two buildings (the Computing Center and Rainier) that UO Secure is unavailable this morning. Staff are investigating.
Comment: Users in the Computing Center are not able to connect to UO Secure. Staff are investigating.
Comment: Wireless service in McClure Hall was interrupted between 10:30am and 11:00am today (June 1) due to an equipment failure.
Comment: The wireless network at the McClure building in the Earl Complex is unavailable. Staff are working on the issue.
Comment: The wireless network at the Earl McClure building is unavailable. Staff are working on the issue.
Comment: Pine Mountain Observatory is experiencing a network service outage. Staff are working on the issue.
Comment: In the early morning hours of Saturday, May 19, from midnight until 7am, wireless may be affected multiple times during work on the campus network as part of a large-scale network redesign and upgrade project. Network traffic within campus will not be affected. Only network traffic to and from campus (e.g., people on campus trying to access the internet, or people off-campus trying to access campus websites) may be interrupted. In the best case, no interruptions will occur. In the worst case, the university will experience three interruptions of up to one hour each during this work period.
Comment: A power outage around 1:25pm today (Friday, May 4) affected part of Eugene, including UO campus. Network services recovered quickly in most campus buildings. Staff have been working to replace a few pieces of failed network hardware. If you have any questions, please contact the Technology Service Desk through the UO Service Portal at https://service.uoregon.edu or by phone at 541-346-4357 (M-F 8am-7pm).
Comment: A power outage around 1:25pm today (Friday, May 4) affected part of Eugene, including UO campus. Network services have been recovering since then, but may take longer to recover in some campus buildings than in others.
Comment: Network services across campus are recovering after a brief power outage around 1:25pm.
Comment: UO Secure is currently unavailable and staff are working to restore service.
Comment: Staff have confirmed reports that UO Secure is not working in the Computing Center. They are working to restore service. The short-term workaround is to use "uowireless".
Comment: We're receiving some reports of issues with wireless. Staff are investigating.
Comment: If you experience any issues with wireless on campus, please report them through the UO Service Portal at https://service.uoregon.edu.
Click the "Report an outage" link on the homepage, then click the green "Report Outage" button, log in with your Duck ID, complete the form, and click the green "Request" button. You can also contact the Technology Service Desk by phone at 541-346-4357.
Comment: To address an ongoing wireless service degradation in some buildings on campus, staff will be performing emergency maintenance around 10pm TONIGHT (Tuesday 5/1/18) that will cause wireless network services to be unavailable in those locations for about 15 minutes each. The following buildings are affected: College of Education complex (HEDCO, Lokey Education, Education Annex, and Clinical Services), Computing Center, Deady Hall, Frohnmayer Music Building, Gerlinger Hall, Gerlinger Annex, Knight Library, Lillis Business Complex, McKenzie Hall, PLC Hall. If you have any questions, please contact the Technology Service Desk at 541-346-4357 (available 8am-7pm).
Comment: Some wireless users are reporting intermittent difficulty reaching off-campus websites. The following buildings are affected: College of Education complex (HEDCO, Lokey Education, Education Annex, and Clinical Services), Computing Center, Deady Hall, Frohnmayer Music Building, Gerlinger Hall, Gerlinger Annex, Knight Library, Lillis Business Complex, McKenzie Hall, PLC Hall. Staff are working on the issue.
Comment: Wireless network services were restored around 3:25pm to the buildings affected by today's service degradation (4/30/18).
Comment: Wireless network services are currently degraded in several buildings on campus. We're receiving reports of slow connections and intermittent connectivity. The affected buildings are: the Computing Center, Deady Hall, Frohnmayer Music Building, Gerlinger Hall, Gerlinger Annex, Knight Library, Lillis Business Complex, McKenzie Hall, PLC Hall, and all buildings in the College of Education complex (HEDCO, Lokey Education, Education Annex, and Clinical Services). Staff are working on the issue.
Comment: Wireless network services are currently degraded in several buildings on campus. We're receiving reports of slow connections and intermittent connectivity. The affected buildings are: the Computing Center, Deady Hall, Frohnmayer Music Building, Gerlinger Hall, Gerlinger Annex, Knight Library, Lillis Business Complex, McKenzie Hall, PLC Hall, and all buildings in the College of Education complex (HEDCO, Lokey Education, Education Annex, and Clinical Services). Staff are working on the issue.
Comment: Wireless network services are currently degraded in several buildings on campus, including Knight Library, Lillis Business Complex, Gerlinger Hall, and the Computing Center. We're receiving reports of slow connections and intermittent connectivity. Staff are working on the issue.
Comment: Network services in Klamath Hall have been restored.
Comment: People in Klamath Hall are having difficulty connecting to network services in the building. Staff will be performing maintenance from about 11:30am to 11:50am to address that issue. The network will be unstable during that maintenance.
Comment: Some people in Streisinger Hall are currently without network services due to a partial power outage caused by unrelated facilities maintenance. That maintenance is expected to be complete by about 5pm today (Tuesday 3/27).
Comment: Some people in Streisinger Hall are currently without network services. Staff are working on the issue.
Comment: Some users have not been able to connect to UO Secure this morning (3/27). You may receive a username and password prompt. Staff are looking into the issue.
Comment: Some people on campus may be experiencing intermittent network issues at the moment. Staff are working on the issue.
Comment: Network services are intermittent at the University Health Center again today (Friday, March 16). Staff are working on the issue.
Comment: Network services are intermittent at the University Health Center. Staff are working on the issue.
Comment: Wireless service is currently unavailable at the University Health Center. Staff are working on the issue.
Comment: We have received reports that Columbia Hall has lost power, which has resulted in the loss of Wireless in that building.
Comment: We have received reports that Columbia Hall has lost wireless access. Staff are working to restore service.
Comment: UO Guest is experiencing a service degradation. Staff are working on the issue.
Comment: We have received reports that the Hatfield Dowlin Complex has lost wireless access. Staff are working to restore service.
Comment: As of about 3:50pm on Thursday, Feb. 1, UO Guest is once again sending passwords to new UO Guest users.
Comment: UO Guest is once again failing to send passwords to new users who set up guest accounts, preventing those users from connecting to the network. Staff are working on the issue.
Comment: As of 9:18am, UO Guest is once again sending passwords to new UO Guest users.
Comment: UO Guest is failing to send passwords to new users who set up guest accounts, preventing those users from connecting to the network. Staff are working on the issue.
Comment: As of 1:57pm today (1/19/18), UO Guest is once again sending passwords to new UO Guest users.
Comment: UO Guest is failing to send passwords to new users who set up guest accounts, preventing those users from connecting to the network. Staff are working to restore service.
Comment: As of 5:55pm (1/10/18), UO Guest is again sending confirmation messages to new UO Guest users.
Comment: UO Guest is failing to send confirmations when users set up a guest account. Staff are working to restore service.
Comment: The UO Guest wireless network is once again sending out password notifications as intended for new accounts.
Comment: We're receiving reports of problems with the UO Guest wireless network. People are reporting that they can create accounts but aren't getting their passwords sent to them. Staff are working on the issue.
Comment: UO Guest accounts created between 12/3/17 and 10:00am today (12/5/17) may generate a "certificate error" when users log in. All other accounts should work properly. To work around the certificate error message, click or tap the "Accept" or "Continue" button to accept the certificate OR create a new UO Guest account.
Comment: UO Guest may have experienced a brief outage (~5 minutes) at 9:30am.
Comment: UO Guest is experiencing a brief outage (~5 minutes) now so that staff can fix a problem that is causing problems for users attempting to log in to UO Guest.
Comment: UO Guest will experience a brief outage (~5 minutes) between 9:30am and 10:00am so that staff can fix a problem that is causing problems for users attempting to log in to UO Guest.
Comment: New UO Guest accounts can now be created, and users with existing UO Guest accounts can log in. Some users with guest accounts generated since midnight last night may receive a certificate error when attempting to log in. Staff are working to resolve this issue.
Comment: Currently, users cannot create new accounts on the UO Guest network. Attempting to do so results in an error message. Guest users who already have accounts are able to log in and use guest wireless. Staff are working to resolve the problem.
Comment: We have received several reports that new UO Guest wireless accounts cannot be created. However, users that have already created accounts can log in to UO Guest. Staff are investigating.
Comment: During planned maintenance work this morning, the university's network was unexpectedly unavailable between approximately 6:09am and 6:22am.
Comment: We have received reports that wireless on all floors of the Thompson University Center (720 E 13th Ave) is running slowly. Staff are investigating. *** Separately, most of campus experienced an outage with UO Secure this morning (11/16) where people were not able to connect to UO Secure. That issue was resolved around 8:00am.
Comment: We have received reports that UO Secure may be unavailable for some users. Staff are working on the issue. To work around this issue, connect to the uowireless network.
Comment: We have received reports that the network on all floors of the Thompson University Center (720 E 13th Ave) is running slowly. Staff are investigating.
Comment: In the Jaqua Academic Center, wireless is completely unavailable on the first floor and partially unavailable on the second floor. Staff are working on the issue.
Comment: Full wireless service at UO Bend was restored over the weekend.
Comment: A power surge at UO Bend has resulted in the hardware failure of one wireless device. Wireless coverage in UO Bend is reduced until a replacement is installed. That work is scheduled for Saturday, Nov. 11.
Comment: A power surge at UO Bend has resulted in the hardware failure of one wireless device. Wireless coverage in UO Bend is reduced until a replacement is installed.
Comment: We have received a report that a power surge in Bend, OR has resulted in a partial outage to the UO Secure network in that location. Staff are investigating.
Comment: We have received a report that a power surge in Bend, OR has resulted in an outage with the UO Secure network in that location.
Comment: Staff have restored UO Secure and UO Guest. Users are connecting to that wireless network. Staff are completing their work and monitoring the service.
Comment: Staff have restored UO Secure and users are connecting to that wireless network. Work will continue as staff monitor the service. Additional adjustments to this service may be necessary to ensure stability and functionality. The UO Guest network will be unstable for another 30 minutes as a result of the work done to restore UO Secure.
Comment: Staff have restored UO Secure and users are connecting to that wireless network. Work will continue as staff monitor the service.
Additional adjustments to this service may be necessary to ensure stability and functionality.
Comment: Users may receive the message "Unable to connect to UO Secure" when attempting to connect to UO Secure. To work around this issue temporarily, connect to "uowireless" or "eduroam". (For eduroam, your username is your full UO email address.) Staff are working to implement a solution to restore service.
Comment: Between 3:51pm and 6:58pm on Thursday (10/26), users were not able to connect to UO Secure. Users should be able to connect to that wireless network now.
Comment: Users may receive the message "Unable to connect to UO Secure" when attempting to connect to UO Secure. Staff are working to restore service. Note that if you are already connected to UO Secure then you will not be affected unless you move locations.
Comment: We're receiving reports from some people of issues connecting to UO Secure. Staff are working on the issue. People who are currently connected may not be experiencing issues.
Comment: Ethernet and wireless network services in Pacific Hall have been intermittent today (Friday, Oct. 6), and are expected to continue being intermittent throughout the weekend due to continuing electrical work.
Comment: At 12:17pm today, staff addressed an issue that made it difficult for some people to join UO Secure and uowireless. Users should be able to connect to wireless more reliably now. Staff continue monitoring wireless network services.
Comment: At 12:17pm today, staff addressed an issue that would make it difficult to join UO Secure and uowireless. Users should be able to connect to that service more reliably now.
Comment: Some users have reported that their connection to Wireless drops and sometimes it is difficult to reconnect. Staff are investigating.
Comment: Some users report that Wireless access drops out and it can be difficult to reconnect. Staff are working on resolving the issue.
Comment: We're receiving reports of wireless being slow or unavailable. Staff are working on the issue. Comment: Ethernet and wireless are available again at both Global Scholars Hall in Eugene and the White Stag Block and Oregon Executive MBA program in Portland after separate outages this morning (Wednesday, Sept. 20). Comment: Ethernet and wireless are unavailable for the Global Scholars Hall. Staff are working on the issue. Ethernet and wireless are available again at the White Stag Block and Oregon Executive MBA program in Portland. Comment: Ethernet and wireless are mostly available again at the White Stag Block and Oregon Executive MBA program in Portland. Comment: Ethernet and wireless are completely unavailable at the White Stag Block and Oregon Executive MBA program in Portland. Staff are working on the issue. Comment: Network services were restored for the UO Motor Pool around 2:40pm today (Friday, Sept. 1), and for the UO Bend Center around 3:20pm. We believe all related network service outages have now been resolved. If you have any questions or continue to have problems, please contact the Technology Service Desk through the new UO Service Portal (service.uoregon.edu) or by phone at 541-346-4357 (Monday-Friday, 8am-5pm). Comment: We have just received reports of network problems at the UO Bend Center. Staff are investigating. Network services continue to be unavailable for the UO Motor Pool. Staff are working to restore service there. If you have any questions, please contact the Technology Service Desk through the new UO Service Portal (service.uoregon.edu) or by phone at 541-346-4357 (Monday-Friday, 8am-5pm). Comment: Network services have been restored for Pine Mountain Observatory. Network services continue to be unavailable for the UO Motor Pool. Staff will resume work on that network on Friday morning (Sept. 1). 
If you have any questions, please contact the Technology Service Desk through the new UO Service Portal (service.uoregon.edu) or by phone at 541-346-4357 (Monday-Friday, 8am-5pm). Comment: Network services for the UO Motor Pool and Pine Mountain Observatory have been unavailable since about 11:45am today (Thursday, Aug. 31). Staff continue working to restore service. If you have any questions, please contact the Technology Service Desk by phone at 541-346-4357 or through the new UO Service Portal (service.uoregon.edu). Comment: Network services have been restored to the UO Bend Center and Oregon Institute of Marine Biology. Network services to the UO Motor Pool remain unavailable and have been down since about 11:45am today (Thursday, Aug. 31). Staff continue working to restore service. If you have any questions, please contact the Technology Service Desk by phone at 541-346-4357 or through the new UO Service Portal (service.uoregon.edu). Comment: As of about 11:48am today (Thursday, Aug. 31), network services are unavailable again at sites such as the UO Bend Center, Oregon Institute of Marine Biology, and UO Motor Pool. Staff are working to restore service to those locations. Network services were briefly unavailable again for all UO locations from about 11:45am to 11:48am. If you have any questions, please contact the Technology Service Desk by phone at 541-346-4357 or through the new UO Service Portal (service.uoregon.edu). Comment: Wireless is unavailable due to a networking configuration issue. Staff are working to restore service. Comment: Earlier this morning (Monday, Aug. 28), an issue with the network may have prevented campus visitors from registering for the UO Guest wireless network. That issue was resolved around 11:30am. Note: People are currently unable to use @uoregon.edu email accounts to register for the UO Guest network. For IT staff trying to test UO Guest, please use a non-UO email address. 
For alternative workarounds, please contact the Technology Service Desk at 541-346-4357 or through the new UO Service Portal (service.uoregon.edu). Comment: We have received reports of campus visitors being unable to register for the UO Guest wireless network. Staff are working on the issue. Comment: An issue with the network may have prevented people from registering for the UO Guest wireless network earlier this morning (Monday, Aug. 28). That issue was resolved around 11:30am. Comment: Network traffic between UO campus and the rest of the world is currently being dropped intermittently. Staff will be performing emergency network maintenance until about 2:35am on Monday, Aug. 28, to address this. Comment: In the early morning hours of Friday, August 25, from midnight until 7am, wireless may be affected multiple times during work on the campus network as part of a large-scale network redesign and upgrade project. Network traffic within campus will not be affected. Only network traffic to and from campus (e.g., people on campus trying to access the internet, or people off-campus trying to access campus websites) may be interrupted. In the best case, no interruptions will occur. In the worst case, the university will experience three interruptions of up to one hour each during this work period. Comment: In the early morning hours of Friday, August 25, from midnight until 7am, service may be affected multiple times during work on the campus network as part of a large-scale network redesign and upgrade project. Network traffic within campus will not be affected. Only network traffic to and from campus (e.g., people on campus trying to access the internet, or people off-campus trying to access campus websites) may be interrupted. In the best case, no interruptions will occur. In the worst case, the university will experience three interruptions of up to one hour each during this work period. 
Comment: Wireless service at Oregon Institute of Marine Biology (OIMB), UO Bend, UO Motor Pool, and Pine Mountain Observatory is unavailable. Staff are working on the issue and expect to have service restored later this afternoon. Comment: Wireless service at Pine Mountain Observatory is unavailable. Staff are working on the issue. Comment: Wireless services to Pine Mountain Observatory are unavailable. Comment: When Firefox users attempt to load wireless.uoregon.edu they receive a message that a security certificate has expired. Other browsers, such as Chrome and Safari, are working. Staff are working to resolve the issue. Users may choose to work around the error by acknowledging the error (usually by clicking the "Continue" button). Comment: When users attempt to load wireless.uoregon.edu they receive a message that a security certificate has expired. Staff are working to resolve the issue. Users may choose to work around the error by acknowledging the error (usually by clicking the "Continue" button). Comment: The network (Ethernet and wireless) at the Center for Educational Medical Research (CMER) building has been unavailable since PeaceHealth did a generator test on Saturday, Aug. 12. UO staff are working with PeaceHealth to restore network service to the building. Comment: Campus wireless networks will be intermittently unavailable from 11:30pm on Wednesday, July 26, through the early morning hours of Thursday, July 27 — possibly as late as 7am — while staff perform maintenance. More info: http://around.uoregon.edu/content/network-maintenance-will-result-overnight-wifi-interruption. Comment: Staff have implemented a temporary fix to the Ethernet and wireless service degradation in Deschutes Hall that we reported earlier, and will soon be implementing a durable fix. Comment: We have received reports that Wireless service in Deschutes Hall is experiencing performance degradation. Staff are working to resolve the issue. 
Comment: We have received reports that wireless service is not working in Deschutes Hall. Staff are working to resolve the issue. Comment: Agate Hall and some of the surrounding buildings have lost power, so Wireless is not available in those locations until power is restored. Comment: Parts of Klamath Hall may be without Wireless. Staff are working to restore that service. Comment: Allen Hall's wireless service is unavailable due to a power outage. Service will be restored when power is restored to the building. Comment: The network in Allen Hall is experiencing intermittent network outages. This affects the entire building EXCEPT the Allen Hall Data Center. Staff are investigating. Comment: The network in Allen Hall is experiencing intermittent complete network outages. Staff are investigating. Comment: We're receiving reports of intermittent network disconnections on the Eugene campus. Staff are working on the issue. Comment: The network outage we reported earlier seems to have been resolved around 8pm (Sunday, April 16). It appears to have been more limited in scope than staff initially suspected. Comment: The Eugene campus is experiencing a widespread network outage. Staff are investigating. We will provide more information when it becomes available. Comment: At Knight Law School, network services (wireless and Ethernet) have been restored. Staff continue troubleshooting some lingering issues with IP phones. Comment: We've received reports that network services (both wireless and Ethernet) are unavailable at Knight Law School. Staff are working on the issue. Comment: We've received reports that the network (both wireless and Ethernet) is unavailable at Knight Law School. Staff are working on the issue. Comment: Service has been restored to Spencer View. Comment: Spencer View is without wireless service after a power outage this morning. A key component of the network failed to start when power returned. 
Staff are working to replace that equipment and restore service. Comment: We have received reports that the network is unavailable for Spencer View Apartment residents. Staff are working on the issue. Comment: Based on multiple reports of slow network speeds, staff are working to resolve the issue. As a result of this work, all users in the residence halls/student housing will experience a complete network outage some time between 9:00pm and 10:00pm. It is expected to last about 20 minutes. Comment: Based on multiple reports of slow network speeds, staff are working to resolve the issue. A portion of campus will experience up to 20 minutes of unstable network access some time between 8:50pm and 9:10pm. All students in the residence halls/student housing will experience a complete network outage some time between 9:00pm and 10:00pm. Comment: Based on multiple reports of slow network speeds, staff are working to resolve the issue. All students in the residence halls/student housing and some portions of campus will experience up to 20 minutes of unstable network access some time between 8:00pm and 9:00pm. Comment: We have received reports that the university's network is running slowly. Staff are working on the issue, and that work caused a brief network outage at 7:05pm. Staff continue to work to restore service. Comment: We have received reports that the university's network is running slowly. Staff are working on the issue, and that work may cause a brief network outage at 6:35pm. Comment: We have received reports that the university's network is running slowly. Staff are working on the issue. Comment: Network services have been restored at all of the locations affected by this evening's power outage (Sunday, Feb. 26). Some buildings were without power from about 6pm to 8:10pm. Others had power again by about 6:35pm. 
Comment: Network services are currently unavailable in many buildings on the west side of the Eugene campus due to a power outage that started around 6pm. Affected buildings include the following, and possibly others: Allen (except data center), Bean East, Collier House, Deady, Education Annex, Friendly, Gerlinger, Gerlinger Annex, Hendricks, Johnson, Jordan Schnitzer Museum of Art, Lawrence, Lillis Complex, Music, Susan Campbell, Villard. Comment: Network services are unavailable in many buildings on the west side of the Eugene campus due to a power outage. Details to follow. Comment: The wireless network is in the process of recovering after a campuswide wireless outage that started around 12:10pm. Wireless is working again in most parts of campus but is still unavailable or unstable in others. Staff continue working on the issue. Comment: The wireless network is in the process of recovering after a campuswide wireless outage that started around 12:10pm. Wireless is working again in most parts of campus but not yet in others. Staff continue working on the issue. Comment: The wireless network is currently unavailable on the Eugene campus. Staff are working on the issue. Comment: Network services are currently unavailable at 1511 Moss Street due to a power outage. Comment: Network services are currently unavailable in two parts of campus: 1) Part of the Lillis Business Complex (staff are working on the issue); 2) Several buildings on the east side of campus due to a power outage: Graduate Village, Romania Building, 1511 Moss St, 1460 Villard St, and UO Motor Pool. Comment: Network services are currently unavailable in two parts of campus: 1) Part of the Lillis Business Complex. Staff are working on the issue. 2) Several buildings on the east side of campus due to a power outage. Affected buildings include Graduate Village, Romania Building, 1511 Moss St, 1460 Villard St, and UO Motor Pool. 
Comment: Ethernet and wireless are unavailable in part of the Lillis Business Complex. Staff are working on the issue. Comment: The wireless network is available again on the Eugene campus after being unavailable campuswide for about 15-20 minutes. Comment: Wireless may be unavailable on parts of campus. Staff are working on the issue. Comment: The wireless network is available again on the Eugene campus after being unavailable campuswide for about 15-20 minutes. Comment: We're receiving reports of wireless network service degradations. Staff are working on the issue. Comment: Ethernet and wireless are available once again at the Moss Street Children's Center. They are still unavailable at the Central Kitchen. Staff are working on the issue. Comment: Ethernet and wireless are currently unavailable at the Moss Street Children's Center and in the Central Kitchen. Staff are working on the issue. Comment: Ethernet and the wireless network are currently unavailable at the Moss Street Children's Center and in the Central Kitchen. Staff are working on the issue. Comment: Wireless on the Eugene campus is available again after being partially unavailable between about 2:05pm and 2:50pm on Monday 1/30. Comment: Wireless on the Eugene campus is experiencing technical problems. You may lose your wireless connection or have a difficult time connecting to wireless. Staff are working to restore full wireless service. Comment: We have received reports that Wireless is unavailable in parts of campus. Staff are working to restore service. Comment: From about 9am to 10am today (Thursday, Jan. 12), some users experienced network slowness on Ethernet and wireless. The service degradation has now been resolved. Comment: From about 9am to 9:50am today (Thursday, Jan. 12), some users experienced network slowness on Ethernet and wireless. The service degradation has now been resolved. Comment: Power outages cause Wireless service outages. 
When power is restored, Wireless service is generally also restored automatically except in situations where equipment is damaged by the power loss. That damaged equipment must be replaced by staff, and with the university closed due to the ice storm, our ability to respond safely is very limited. For more on campus closures, see https://alerts.uoregon.edu. Comment: Power fluctuations are causing sporadic network issues on campus this evening (Wednesday, Dec. 14). As of about 6:10pm, the network had been down at the Central Kitchen & Moss Children's Center since about 5pm. PLC, Hamilton, Huestis, and Spencer View have had brief network outages. Update 6:48pm: Other buildings reporting network outages include Bean, Barnhart, and the Jaqua Center. Some of these buildings may experience multiple outages. Other buildings may also be affected. Comment: Power fluctuations are causing sporadic network issues on campus this evening (Wednesday, Dec. 14). As of about 6:10pm, the network had been down at the Central Kitchen & Moss Children's Center since about 5pm. PLC, Hamilton, Huestis, and Spencer View have had brief network outages. Other buildings may also be affected. Comment: Staff are working to resolve intermittent network problems in Pacific Hall (and possibly Columbia Hall). Comment: The network is unavailable in the Atrium Building. Staff are working on the issue. Staff are also working to resolve intermittent network problems in Pacific Hall (and possibly Columbia Hall). Comment: This outage was due to scheduled electrical maintenance work. Comment: The network is down at Matthew Knight Arena. Staff are investigating. Comment: Some UO wireless users may currently be unable to access the Internet. Staff are working on the issue. Comment: We have received several reports that UO Secure and 'uowireless' at the White Stag building in Portland are not allowing users to stay connected. Staff are investigating. 
Comment: Service has been restored to Pacific and Columbia Halls. We have reports that Wireless at Agate Hall is unavailable. Staff are working to restore service. Comment: Power has been restored to Pacific and Columbia Halls. Wireless will continue to be partially unavailable due to equipment affected by the power fluctuation. Staff are working to restore full Wireless service, and we expect to have service restored shortly. Comment: Power has been restored to Pacific and Columbia Halls. Wireless will continue to be partially unavailable due to equipment affected by the power fluctuation. Staff are working to restore full Wireless service. Comment: Power is out in Pacific and Columbia Halls. Wireless service will be restored when the power returns. Wireless was affected briefly in several campus buildings due to power fluctuations related to the lightning. Comment: Power is out in Pacific and Columbia Halls and Spencer View Housing. Wireless service will be restored when the power returns. Wireless was affected briefly in several campus buildings due to power fluctuations related to the lightning. Comment: We have received a report that wireless may have been affected by a lightning-related power surge. Staff are investigating. Comment: Wireless is working intermittently for users in Friendly Hall and Hendricks Hall. Staff are working to resolve the problem. Comment: Wireless is not working for users in Friendly Hall. Staff are working to resolve the incident. Comment: We have received reports that Wireless is not working for some users in Friendly Hall. Comment: The network has been partially unavailable in Hendricks and Friendly Halls since about 12:30pm today (Monday 10/3). Comment: The network was briefly unavailable in Hendricks and Friendly Halls around 12:30pm today (Monday 10/3). Comment: The "eduroam" network is currently unavailable. If you are visiting the university and have been using Eduroam, please use UO Guest until this issue is resolved. 
For more information, see Connecting to Guest Wireless at https://it.uoregon.edu/guest-wifi. Comment: The "eduroam" network is currently unavailable. If you are visiting the university and have been using Eduroam, please use UO Guest until this issue is resolved. For more information, see Connecting to Guest Wireless at https://it.uoregon.edu/guest-wifi. Comment: Wireless is unavailable in Bean East and Bean West due to a power outage. Contractors are working on power in Bean, and once they restore electricity, wireless service will be restored. Comment: The network (wired and wireless) is unavailable in the Lewis Integrative Science Building. Staff are working on the issue. Comment: Service has been restored, though Ford Alumni Center continues to be affected by this incident. Staff are working to fully restore service. Comment: Most wireless service has returned, though Ford Alumni Center and White Stag in Portland continue to be impacted by this incident. Staff are working to fully restore service. Comment: On Thursday, June 16, we switched to a new system for the UO Secure and eduroam wireless networks. The first time you connect to these networks after the change, you may have to reenter your username and password or trust a new wireless certificate. If you're unable to connect, go to http://wireless.uoregon.edu/ and walk through the steps to update your device's wireless configuration. For additional help, contact your local IT support staff or the Technology Service Desk (techdesk@uoregon.edu; 541-346-4357). Comment: From Wed. 6/15 through Fri. 6/17, between 4am and 7am, we will be performing maintenance on the wireless network. During that window, there will be rolling wireless outages of 10 to 30 minutes across the Eugene campus and for the White Stag, Bend, OIMB, and OEMBA facilities. ***** Separately, on Thurs. 6/16, between 5am and 7am, we will be switching to a new system for the UO Secure and eduroam wireless networks. 
The first time you connect to these networks after the change, you may have to reenter your username and password or trust a new wireless certificate. If you're unable to connect, go to http://wireless.uoregon.edu/ and walk through the steps to update your device's wireless configuration. For help, contact the Technology Service Desk (techdesk@uoregon.edu; 541-346-4357). Comment: From Wed. 6/15 through Fri. 6/17, between 4am and 7am, we will be performing maintenance on the wireless network. During that window, there will be rolling wireless outages of 10 to 30 minutes across the Eugene campus and for the White Stag, Bend, OIMB, and OEMBA facilities. ******* Separately, on Thurs. 6/16, between 5am and 7am, we will be switching to a new system for the UO Secure and eduroam wireless networks. The first time you connect to these networks after the change, you may have to reenter your username and password or trust a new wireless certificate. If you're unable to connect, go to http://wireless.uoregon.edu/ and walk through the steps to update your device's wireless configuration. For help, contact the Technology Service Desk (techdesk@uoregon.edu; 541-346-4357). Comment: On Thursday, June 16, between 5am and 7am, we will be switching to a new system for the UO Secure and eduroam wireless networks. The first time you connect to these networks after the change, you may have to reenter your username and password or trust a new wireless certificate. If you're unable to connect, go to http://wireless.uoregon.edu/ and walk through the steps to update your device's wireless configuration. For help, contact the Technology Service Desk (techdesk@uoregon.edu; 541-346-4357). Comment: At Agate Hall, the network is unavailable due to a power outage. Staff are working on the issue. Comment: UO Guest: People are getting alerts that the security certificate is invalid when they try logging in to UO Guest. 
This expired certificate will be updated tomorrow (Friday, May 13) between 5am and 7am. For most of that window, both UO Guest and eduroam will be entirely unavailable. The workaround in the meantime is to temporarily indicate that you trust the expired certificate. (The method for doing this varies by operating system and web browser.) You should then be able to connect to the network. *** Wireless services in the EMU have been restored. Comment: UO Guest: People are getting alerts that the security certificate is invalid when they try logging in to UO Guest. This expired certificate will be updated tomorrow (Friday, May 13) between 5am and 7am. For most of that window, both UO Guest and eduroam will be entirely unavailable. The workaround in the meantime is to temporarily indicate that you trust the expired certificate. (The method for doing this varies by operating system and web browser.) You should then be able to connect to the network. ***** Separately, wireless in the EMU is UNAVAILABLE on Thursday, May 12, from 5:00am to noon due to equipment upgrades. The timing of this work is necessary to meet the EMU's construction-related move-in schedule. Comment: Staff are working on the issue. UO Guest: People are getting alerts that the security certificate is invalid when they try logging in to UO Guest. This expired certificate will be updated tomorrow (Friday, May 13) between 5am and 7am. For most of that window, both UO Guest and eduroam will be entirely unavailable. The workaround in the meantime is to temporarily indicate that you trust the expired certificate. (The method for doing this varies by operating system and web browser.) You should then be able to connect to the network. ***** Separately, wireless in the EMU is UNAVAILABLE on Thursday, May 12, from 5:00am to noon due to equipment upgrades. The timing of this work is necessary to meet the EMU's construction-related move-in schedule. Comment: Staff are working on the issue. 
The UO Guest wireless network is currently unavailable. Staff are working on the issue. ***** Separately, wireless in the EMU is UNAVAILABLE on Thursday, May 12, from 5:00am to noon due to equipment upgrades. The timing of this work is necessary to meet the EMU's construction-related move-in schedule. Comment: Staff are working on the issue. The Tech Desk has received reports of users unable to connect to the UO Guest wireless network. Comment: Wireless in the EMU is UNAVAILABLE on Thursday, May 12, from 5:00am to noon due to equipment upgrades. The timing of this work is necessary to meet the EMU's construction-related move-in schedule. Comment: Wireless is unavailable in the Millrace 3 Building. Staff are working on the issue. ***** Separately, wireless in the EMU will be UNAVAILABLE on Thursday, May 12, from 5:00am to noon due to equipment upgrades. The timing of this work is necessary to meet the EMU's construction-related move-in schedule. Comment: Wireless in the EMU will be UNAVAILABLE on Thursday, May 12, from 5:00am to noon due to equipment upgrades. The timing of this work is necessary to meet the EMU's construction-related move-in schedule. Comment: Wireless is currently unavailable in Walton North. Staff are working on the issue. Workaround: use Ethernet. ***** Separately, wireless in the EMU will be UNAVAILABLE on Thursday, May 12, from 5:00am to noon due to equipment upgrades. The timing of this work is necessary to meet the EMU's construction-related move-in schedule. Comment: Wireless will be UNAVAILABLE in the EMU on Thursday, May 12 from 6:00am to noon to perform equipment upgrades. The timing of this work is necessary to meet the EMU's move-in schedule. Comment: Wireless will be UNAVAILABLE on Thursday, May 12 from 6:00am to noon to perform equipment upgrades. The timing of this work is necessary to meet the EMU's move-in schedule. Comment: The network in the EMU is down due to a power outage. Service will be restored once power returns. 
Comment: There is a network outage at the Casanova Building. Staff are working on the issue. Comment: The wireless network in 1715 Franklin lost connectivity for a short time at approximately 11:24am today (4/19/16). Comment: There is a partial network outage at the Atrium Building, affecting the Center on Brain Injury Research & Training (CBIRT). Staff are working on the issue. Comment: Wireless was unavailable throughout the Eugene campus starting around 8:18pm on Sunday, April 3. The actual outage was brief, but people experienced difficulty connecting to wireless again until about 8:46pm. Comment: Wired and wireless networking are partially unavailable in Willamette Hall after a brief power outage. Staff are working on the issue. Comment: Willamette Hall just lost power briefly. Network devices are in the process of coming back on. Comment: We're receiving reports that parts of the EMU have no network connectivity. Staff are working on the issue. Comment: From about 5:15am to 6:30am today (Monday 3/21), a network outage caused UO websites to be intermittently unreachable from certain off-campus Internet locations. Also, many non-UO websites were intermittently unreachable from campus during that window. Comment: On Friday, Feb. 19, between about 1pm and 3pm, people signing up for new UO Guest wireless accounts did not receive email or text notifications containing their passwords (and were therefore unable to log in). As of about 3pm, the delayed messages had been sent and new notifications were working. Comment: We've received reports of issues with a couple of UO Guest wireless account signups. Specifically, users did not receive the emails or text messages containing their passwords and were therefore unable to log in. Staff are looking into the issue. Comment: We've received reports that new users are unable to connect to uowireless and UO Secure. Existing users are able to connect. Staff are working on the issue. 
Comment: We've received reports of some users being unable to connect to uowireless and UO Secure. Users who are already connected are not affected. Staff are working on the issue. Comment: At the moment new users are unable to connect to uowireless and UO Secure. Comment: The guest wireless sponsorship service has changed as of January 2016. Guests: You now have a self-service option. For instructions on signing up, see the FAQ at it.uoregon.edu/guest-wireless-self-service-launch. Sponsors: The sponsors website (sponsors.uoregon.edu) was updated on Jan. 22. Step-by-step instructions for using it are at it.uoregon.edu/guest-wireless-sponsorship. If you have any questions, please contact the Technology Service Desk (techdesk@uoregon.edu; 541-346-4357). Comment: The guest wireless sponsorship service has changed as of January 8. Guests: You now have a new self-service option. For directions on registering yourself, see it.uoregon.edu/guest-wireless-self-service-launch. Sponsors: Until Jan. 22, you are temporarily unable to sponsor guests. Visit the above web page for details and an FAQ. If you have additional questions, please contact the Technology Service Desk (techdesk@uoregon.edu; 541-346-4357). Comment: The new UO Guest wireless system is not available at OIMB, OEMBA and White Stag. We plan to update these locations on 01/13/2016. The guest wireless sponsorship service has changed as of January 8. Guests: You now have a new self-service option. For directions on registering yourself, see it.uoregon.edu/guest-wireless-self-service-launch. Sponsors: Until Jan. 22, you are temporarily unable to sponsor guests. Visit the above web page for details and an FAQ. If you have additional questions, please contact the Technology Service Desk (techdesk@uoregon.edu; 541-346-4357). Comment: The new UO Guest wireless system is not available at OIMB, OEMBA and White Stag. We plan to update these locations on 01/13/2016. 
The guest wireless sponsorship service has changed as of January 8. Guests: You now have a new self-service option. For directions on registering yourself, see it.uoregon.edu/guest-wireless-self-service-launch. Sponsors: Until Jan. 22, you are temporarily unable to sponsor guests. Visit the above web page for details and an FAQ. If you have additional questions, please contact the Technology Service Desk (techdesk@uoregon.edu; 541-346-4357). Comment: Staff are working on the issue. The new UO Guest wireless system is not available at OIMB, OEMBA and White Stag. We plan to update these locations on 01/13/2016. The guest wireless sponsorship service has changed as of January 8. Guests: You now have a new self-service option. For directions on registering yourself, see it.uoregon.edu/guest-wireless-self-service-launch. Sponsors: Until Jan. 22, you are temporarily unable to sponsor guests. Visit the above web page for details and an FAQ. If you have additional questions, please contact the Technology Service Desk (techdesk@uoregon.edu; 541-346-4357). Comment: On Friday, January 8, the guest wireless sponsorship service will be changing. Guests will have a new self-service option (starting at 6:30am that day), and sponsors will temporarily be unable to sponsor guests for a few weeks (starting at 5:00am). Sponsors and guests should visit it.uoregon.edu/guest-wireless-self-service-launch for details and an FAQ. If you have additional questions, please contact the Technology Service Desk (techdesk@uoregon.edu; 541-346-4357). Comment: On Friday, January 8, the guest wireless sponsorship service will be changing. Guests will have a new self-service option, and sponsors will temporarily be unable to sponsor guests. Sponsors and continuing guests should visit it.uoregon.edu/guest-wireless-self-service-launch for more information. Comment: On January 8, 2016, the guest wireless sponsorship service will be changing. 
Guests will have a new self-service option, and sponsors will temporarily be unable to sponsor guests. Sponsors and continuing guests should visit it.uoregon.edu/guest-wireless-self-service-launch for more information. Comment: Some people using VPN (virtual private networking) are reporting having their sessions suddenly ended and getting an error message that says "untrusted VPN server certificate." Staff are working on the issue. Comment: Between about 4am and 8:45am today (Dec. 10), some people may have experienced intermittent issues while using the wired and wireless networks. Comment: Between 5pm and 7pm on Thursday, Dec. 3, we received reports of wireless issues from several places on campus. If you are still experiencing wireless issues, please email the Technology Service Desk at techdesk@uoregon.edu, or call 541-346-4357 on Friday, Dec. 4, between 8am and 7pm. Comment: Staff are working to resolve the issues with wireless. Comment: We have received reports of wireless issues from several locations around campus including some dorms. Comment: We have received reports of wireless issues in Bean East and the Knight Law Library. Comment: We have received reports of possible wireless issues in Bean East and the Knight Law Library. Comment: The uoguest and eduroam wireless networks are experiencing intermittent connectivity. Staff are working on the issue. Comment: Wireless may be intermittent from now until 6:15pm today (Nov. 4) while staff perform maintenance. Comment: UO Secure, uowireless, UO Guest, and eduroam wireless networks experienced an outage around 10:50am today (11/4/15) that lasted about 10 minutes. Comment: We have received reports of wireless problems in Lillis Hall, McKenzie Hall, the Computing Center, and 1715 Franklin Blvd. Staff are looking into the issue. Comment: We have received reports of wireless problems in Lillis Hall. Staff are looking into the issue. Comment: We have received reports of wireless problems in Lillis Hall. 
Staff are working to restore service. Comment: Some users in the northwest portion of campus are experiencing difficulties connecting to the uowireless wireless network. After they enter their credentials, the connection seems to hang. Staff are working on the issue. Workaround: Wait a few minutes and try again, or use UO Secure (see: https://it.uoregon.edu/uo-secure). Comment: If you're experiencing problems with wireless, please call the Tech Desk at 541-346-4357 to report your location and other details. Some users in McKenzie Hall are unable to connect to the uowireless wireless network. After they enter their credentials, the connection seems to hang. Workaround: use UO Secure (see: https://it.uoregon.edu/uo-secure). Comment: If you're experiencing problems with wireless, please call the Tech Desk at 541-346-4357 to report your location and other details. Around 5pm on Oct. 9, some users in McKenzie Hall were unable to connect to the uowireless wireless network. After they entered their credentials, the connection seemed to hang. Workaround: Try another location, or if you've already set up UO Secure on your computer, try that network (see: https://it.uoregon.edu/uo-secure). Comment: Some users in McKenzie Hall are unable to connect to the UO Wireless network. After they enter their credentials, the connection seems to hang. Workaround: If you have already set up your computer to use UO Secure, then use that network. (UO Secure is the preferred wireless network for UO students, faculty, and staff: https://it.uoregon.edu/uo-secure.) Or try another location. Comment: Some users have reported difficulties connecting to the uowireless wifi network. At this time other UO wifi networks appear to be available and unaffected by any issues. Comment: The network is currently unavailable in the Living Learning Center North building. Staff are working on the issue. Comment: The Oct. 8 wireless issues appear to be resolved as of about 3:10pm. 
If you continue to have problems, please contact the Technology Service Desk (techdesk@uoregon.edu; 541-346-4357). Comment: The Oct. 8 wireless issues appear to be resolved as of about 3:10pm. If you continue to have problems, please contact the Technology Service Desk (techdesk@uoregon.edu; 541-346-4357). Comment: Some locations on campus continue to report issues with wireless. Staff are working on the issue. Comment: Earlier today, Oct. 8, multiple locations on campus experienced intermittent wireless outages. As of 12pm, staff believe those outages have been resolved. If you continue to have problems, please contact the Technology Service Desk (techdesk@uoregon.edu; 541-346-4357). Comment: The wireless network is experiencing a service degradation. Specifically, it is intermittently slow or unavailable for some users. Staff are working on the issue. Comment: Multiple locations experienced intermittent outages to the wired and wireless networks this morning between about 7:30am and 8:15am. The issue is now resolved. If you continue to have problems, please contact the Technology Service Desk (techdesk@uoregon.edu; 541-346-4357). Comment: Multiple locations experienced intermittent outages to the wired and wireless networks this morning between about 7:30am and 8:15am. The issue is now resolved. Comment: Some users are receiving an error when trying to connect to UO Secure. The error message says: "We failed to verify that the connection is working properly. Click, 'Retry' to try again. Click, 'Skip' to proceed without verification." If you are experiencing this error message, please click "Skip" to proceed with the connection. Staff are working to resolve the issue. Comment: We are receiving reports that some users are receiving errors when connecting to UO Secure. The error message says: "We failed to verify that the connection is working properly. Click, 'Retry' to try again. Click, 'Skip' to proceed without verification." 
If you are experiencing this error message, please click "Skip" to proceed. Staff are also working to resolve the issue. Comment: Staff are working on the issue. We are receiving reports that some users are receiving errors when connecting to UO Secure. The error message says: "We failed to verify that the connection is working properly. Click, 'Retry' to try again. Click, 'Skip' to proceed without verification." If you are experiencing this error message, just click "Skip" to proceed. Comment: UO's connection to the Internet was restored at 11:35am thanks to the work of UO's Internet service providers. Comment: Some off-campus websites and services are not responding due to a network issue with a key Internet service provider for the University of Oregon and Oregon State University. They are making adjustments to restore service. On-campus services appear to be working, especially for people located on campus. Comment: Some off-campus websites and services are not responding due to a network issue with a key Internet service provider for the University of Oregon. They have made adjustments to restore service. On-campus services appear to be working, especially for people located on campus. Comment: Some off-campus websites and services are not responding. On-campus services appear to be working. Staff are engaged and are working to restore service. Comment: The northeast quarter of campus will have an increased chance of brief wireless outages on Tuesday, July 14 between 6am and 7am. The northwest and southwest quarters of campus will also have an increased chance of brief wireless outages on Wednesday, July 15 between 6am and 7am. While we do not expect wireless to be unavailable, there is an increased chance of brief outages lasting about 10 minutes as we complete wireless work. Comment: Some attendees of the Overseas Association for College Admission Counseling (OACAC) conference are reporting difficulty connecting to the UO Guest wireless network. 
If you are having problems, please contact the Technology Service Desk at (541) 346-HELP (4357) or techdesk@uoregon.edu. Comment: Some attendees of the Overseas Association for College Admission Counseling (OACAC) conference are reporting difficulty connecting to the UO Guest wireless network. If you are having problems, please contact techdesk@uoregon.edu. Comment: UO Guest issues: UO Guest experienced issues that prevented wireless users from accessing it starting late last evening (6/24/15). Service was restored at ~ 7:30 AM this morning, 6/25/15. Comment: Users in Lawrence Hall are experiencing some network slowness. Staff are working on the issue. Comment: Wireless is currently unavailable at Carson Hall due to a power outage. UO's Central Power Station is working to restore power. Comment: SUBJECT: Wireless work - Maintenance AFFECTED: 802.11b/g (2.4GHz) wireless clients, NE campus quadrant STATUS: Planned START TIME: Fri, 6/5/15 6:00 AM END TIME: Fri, 6/5/15 6:15 AM DESCRIPTION: 802.11b/g (2.4GHz) wireless clients in the NE quadrant of campus will be briefly interrupted during tomorrow morning's maintenance window. Clients that support 802.11a/n/ac (5GHz) and are within range should transition over automatically during the work. Comment: We have received multiple reports of impact to wifi on campus. Some people are unable to load login screens. Some are connected but performance is heavily impacted. Staff are engaged and working towards resolution. Comment: EDUROAM is not working for anyone who is logging in with accounts from other institutions. Staff are working with EDUROAM to resolve the issue. The work-around is to have the guest of the university get sponsored for wifi access, then use the UO Guest network. Comment: We are receiving reports of problems with UO Secure and uowireless, specifically in the northwest quadrant of campus. Staff are working on the issue. ... 
Separately, EDUROAM is not working for anyone who is logging in with accounts from other institutions. Staff are working with EDUROAM to resolve the issue. The work-around is to have the guest of the university get sponsored for wifi access, then use the UO Guest network. Comment: 5/6/15: We are receiving reports of problems with UO Secure and uowireless. Staff are working on the issue. ... Separately, EDUROAM is not working for anyone who is logging in with accounts from other institutions. Staff are working with EDUROAM to resolve the issue. The work-around is to have the guest of the university get sponsored for wifi access, then use the UO Guest network. Comment: We have received reports that EDUROAM is not working for anyone who is logging in with accounts from other institutions. Staff are working to resolve the issue. The work-around is to have the guest of the university get sponsored for wifi access, then use the UO Guest network. Comment: We have received reports that EDUROAM is not working for anyone who is logging in with accounts from other institutions. Staff are working to resolve the issue. Comment: A portion of the EMU's basement has no network connectivity this morning. Staff are working to replace the equipment. Comment: The network at the Erb Memorial Union (EMU) is unavailable. Staff are working on the issue. Comment: Users on the east side of campus may experience intermittent problems logging in to wireless. Staff are working on the issue. Comment: Campus wireless service will experience a brief outage on Monday, March 30 between 6:00AM and 6:15AM PDT to complete maintenance work. At one point during this period, wireless connections will drop and then automatically reconnect. Comment: Staff are working on the issue. Wireless is working intermittently across campus. Staff are working to restore service. Comment: UO Guest, eduroam, and UO Preauth are not working at the moment. These same wireless networks have been restored at OEMBA. 
Comment: Users may experience troubles connecting to UO Guest, eduroam, or UO Preauth. Staff have been notified and are working on the issue. Comment: As of about 2:45PM, UO Secure and uowireless both were working more consistently. Some intermittent problems may still exist. Staff continue to work to fully restore service. Comment: Users cannot connect to UO Secure or uowireless. Staff are working to restore service. The specific problem involves devices failing to get a valid IP address. Comment: Some users cannot connect to UO Secure. Staff are working to restore service. Specifically, some users are not able to get an IP address. Comment: A few users have reported a problem with wireless. Specifically, they are not able to get an IP address. Staff are looking into the scope of the problem and ways to resolve the issue. Comment: The security certificate for UO Secure was updated on 12/19/14. If you cannot reconnect to UO Secure, connect to "uowireless", go to https://wireless.uoregon.edu, and run the installer. Comment: The power outage on Saturday (12/13) left a few wireless access points offline. Staff continue work today to restore all access points. Comment: We have received reports of some wireless outages on campus due to a power outage that began at 4:00pm. Staff are working to restore service. Comment: The eduroam network is currently unavailable to guests visiting UO. UO staff *off campus* will also be unable to use eduroam with their @uoregon.edu accounts. Staff are working to resolve this issue. Comment: The eduroam network is currently unavailable to guests visiting UO. Staff are working to resolve this issue. Comment: Staff are working on the issue. We are experiencing intermittent issues with our back end authentication which may impact wireless authentication and are actively pursuing the issue. Comment: Staff are working on the issue. Wireless is down in the south end of Global Scholars Hall. 
Comment: Wireless will be intermittently unavailable on Thursday, Nov. 6 between 5:00AM and 6:30AM for the northeast quarter of campus. Other sections of campus will not be affected. Comment: Wireless service to the Health, Counseling, and Testing building will be unavailable during maintenance on 9/8/14 between 6AM and 7AM. Comment: Staff are working on the issue. UO Secure wireless in the McKenzie collaboration center is down. Comment: Staff are working on the issue. Wireless in the McKenzie Collaboration Center is down. Comment: UO Guest issues have been resolved. Comment: The Technology Service Desk has received reports of users having troubles connecting to UO Guest. Staff have been notified and are working on the issue. Comment: Service for UO Guest and eduroam was restored at 2:45PM PDT on 29-July-2014. Comment: At 2:30PM PDT, the wireless networks UO Guest and eduroam went offline. Staff are working to restore these networks. Comment: The UO Guest and Eduroam networks experienced a service degradation between 5:45PM and 7:00PM PDT on Friday, July 25. Comment: This service has been affected by a power outage at Pacific, Klamath, parts of Onyx, and the EMU. Power was lost at 11:40am. Comment: The login page for UO Guest wireless is not appearing properly, which prevents users from logging in. Staff are working to resolve the issue. Comment: Knight Library wireless will undergo maintenance between 5:00AM and 7:00AM PDT on Thursday, 5-June-2014. While no outage is expected, there is an increased chance of brief issues during this period. Comment: This service was down between noon and 12:4pm today (28-March-2014). Comment: PLC experienced a power outage at 9:46am today (5-May-2014). Wireless, Ethernet, and some telephones are unavailable until the power has been restored. Comment: The Eduroam network is currently unavailable to non-UO users. We are working to resolve the issue. 
Comment: On 1-May-2014 between 3am and 7am, this service may be slow or fail for brief periods. We are upgrading the service so that it works better during peak times. Comment: Work in Portland (White Stag and OEMBA) was completed successfully on 24-April-2014 between 5am and 7am PDT. Comment: For Portland (White Stag and OEMBA), we are doing work on 24-April-2014 between 5am and 7am PDT. Wireless will be sporadic during this two hour window. We are also doing work on the Eugene campus that same day between 6am and 7am PDT. No outage is expected, though you may see very brief outages during this time. Comment: We are doing work on 24-April-2014 between 6am and 7am PDT. No outage is expected, though you may see very brief outages during this time. Comment: We have received reports that some wireless users are dropping connections to UO Secure and cannot reconnect. Staff are working to resolve this problem. Comment: Service was restored at 8:20am. Comment: Part of Johnson Hall is experiencing a network outage due to a problem with network equipment. Staff are working to resolve the issue. This problem began approximately 7:48am today. Comment: The work scheduled for Tuesday, 4-March-2014, between 6am and 6:30am has been deferred. It will be scheduled for another day in the future. Comment: On Tuesday, 4-March-2014, between 6am and 6:30am, wireless connections will be unreliable on the following networks: UO Guest, UO Preauth, eduroam, and Athletics' networks. The interruption is due to brief maintenance work. Comment: The work performed at 5am on Thursday, 27-Feb-2014, was completed successfully. Comment: Between 5am and 7am on Thursday, 27-Feb-2014, wireless across all campuses (Eugene, Portland, and Charleston) will work intermittently as we upgrade software. Comment: Wireless service was restored at 2:45pm today (25-February-2014). Comment: The wireless problem has cascaded to the other half of campus and now affects all wireless. 
We are working to restore wireless service. Comment: The wireless problem has cascaded to the other half of campus. We are actively working to restore wireless service. Comment: Wireless on the west side of campus (west of University Street) began experiencing problems around 1:30pm. We are working to resolve the issue. Comment: Wireless on the west side of campus began having connectivity problems at 1:33pm today. Staff are working to resolve the issue. Comment: Weather-related power outages caused some wireless outages over the weekend. Please be careful of falling trees and limbs on campus. See alerts.uoregon.edu for information. Comment: The network is out of service at Moss Street Children's Center, Museum of Natural History, Autzen Stadium South Suite building, and Turner Construction. The outages are the result of weather-induced power outages. Ice accumulation on trees and power lines has created extremely hazardous conditions on campus. Avoid outdoor travel if possible. Power outages are possible, and when power is lost, wireless and wired networking are unavailable. See alerts.uoregon.edu for more weather information. Comment: Ice accumulation on trees and power lines has created extremely hazardous conditions on campus. Avoid outdoor travel if possible. Power outages are possible, and when power is lost, wireless and wired networking are unavailable. See alerts.uoregon.edu for more weather information. Comment: Extremely hazardous conditions exist on campus this afternoon due to ice accumulation on trees and power lines. Avoid outdoor travel if possible. Power outages are possible, and when power is lost, wireless and wired networking are unavailable. See alerts.uoregon.edu for more weather information. Comment: Extremely hazardous conditions exist on campus this afternoon due to ice accumulation on trees and power lines. Avoid outdoor travel if possible. Power outages are possible, and when power is lost, wireless and wired networking are unavailable. 
See http://alerts.uoregon.edu/2014/02/08/extremely-hazardous-conditions/ for more weather information. Comment: Attempts to log in to UO Secure, uowireless, or eduroam are working intermittently. Staff are engaged in resolving this issue. Comment: Currently, users cannot log in to UO Secure, uowireless, or eduroam. Staff are working to resolve the problem. Comment: Wireless is slow to connect on some devices during peak usage times, which are the middle of each workday. On Monday, 27-January-2014, between 5am and 6am, we will do work to address this issue. During that period, wireless will be unavailable. Comment: Wireless is slow to connect on some devices during peak usage times, which are the middle of each workday. On Monday, 27-January-2014, between 5am and 7am, we will do work to address this issue. During that period, wireless will be unavailable. Comment: Wireless will be unavailable or sporadic between 5:00am and 6:00am on Thursday and Monday, 23-January-2014 and 28-January-2014. We are performing work on the wireless network during these times. Comment: Wireless is unavailable on the second and third floors of the HEDCO Education building. The issue began around 5:45am this morning (7-Jan-2014). Staff are working to restore service. Comment: On 19-Dec-2013 and 20-Dec-2013 between 9am and 5pm, the Bowerman, Hayward, and Casanova facilities will experience brief wireless outages. (The EMU was upgraded on Monday and Tuesday, 16-Dec and 17-Dec.) On 20-Dec-2013 between 9am and 5pm, Heustis Hall and Cascade Hall will experience brief wireless outages. On 23-Dec-2013, the Student Health Center will experience brief wireless outages. Comment: On 18-Dec-2013 between 9am and 5pm, the following buildings will experience brief wireless outages: Earl Hall, Wilkinson House, HEP, and Chapman Hall. On 19-Dec-2013 and 20-Dec-2013 between 9am and 5pm, the Bowerman, Hayward, and Casanova facilities will experience brief wireless outages. 
(The EMU was upgraded on Monday and Tuesday, 16-Dec and 17-Dec.) On 20-Dec-2013 between 9am and 5pm, Heustis Hall and Cascade Hall will experience brief wireless outages. On 23-Dec-2013, the Student Health Center will experience brief wireless outages. Comment: Between 16-Dec-2013 and 18-Dec-2013, the EMU will experience brief wireless outages as the oldest wireless equipment is replaced. On 18-Dec-2013 between 9am and 5pm, the following buildings will experience brief wireless outages: Earl Hall, Wilkinson House, HEP, and Chapman Hall. On 19-Dec-2013 and 20-Dec-2013 between 9am and 5pm, the Bowerman, Hayward, and Casanova facilities will experience brief wireless outages. Comment: Between 16-Dec-2013 and 18-Dec-2013, the EMU will experience brief wireless outages as the oldest wireless equipment is replaced. On 18-Dec-2013 between 9am and 5pm, the following buildings will experience brief wireless outages: Earl Hall, Wilkinson House, HEP, and Chapman Hall. Comment: On 12-Dec-2013 between 6am and 7am, there will be brief wireless outages at Matt Knight Arena. Next week, between 16-Dec-2013 and 18-Dec-2013, the EMU will experience brief wireless outages as the oldest wireless equipment is replaced. On 18-Dec-2013 between 9am and 5pm, the following buildings will experience brief wireless outages: Earl Hall, Wilkinson House, HEP, and Chapman Hall. Comment: On 12-Dec-2013 between 6am and 7am, there will be brief wireless outages at Matt Knight Arena. Next week, between 16-Dec-2013 and 18-Dec-2013, the EMU will experience brief wireless outages as the oldest wireless equipment is replaced. Comment: The Hatfield-Dowlin Complex will experience wireless outages on 5-Dec-2013 between 5:30am and 6:30am as we reconfigure the network in that building. Only the Hatfield-Dowlin Complex, next to Autzen Stadium, will be affected. Comment: On Monday, 2-Dec-2013, the wireless network will experience brief outages between 5:30am and 6:30am. 
Comment: On Wednesday, 27-Nov-2013 between 8am and 5pm PST, the oldest wireless equipment will be replaced in HEP (1685 E 17th Ave) and in A&AA's building at 1479 Moss St. Brief wireless outages will occur during these times. Comment: We have two scheduled events. On Tuesday, 26-Nov-2013, between 5am and 7am PST, the wireless network will experience brief outages as we work to complete an upgrade started on Monday. On Wednesday, 27-Nov-2013 between 8am and 5pm PST, the oldest wireless equipment will be replaced in HEP (1685 E 17th Ave) and in A&AA's building at 1479 Moss St. Brief wireless outages will occur during these times. Comment: Today (11/25/2013), the oldest wireless equipment will be replaced in Zebrafish and the Museum of Natural History between 8am and 5pm. Later in the week, the oldest wireless equipment will be replaced in HEP (1685 E 17th Ave) and in A&AA's building at 1479 Moss St. on 11/27/2013 between 8am and 5pm. Brief wireless outages will occur during these times. Comment: Key wireless equipment on campus will be upgraded on 11/25/2013 between 3:30am and 7:00am to improve stability. The wireless network will be unavailable or work sporadically during this period. That same day (11/25/2013), the oldest wireless equipment will be replaced in Zebrafish and the Museum of Natural History between 8am and 5pm. Later in the week, the oldest wireless equipment will be replaced in HEP (1685 E 17th Ave) and in A&AA's building at 1479 Moss St. on 11/27/2013 between 8am and 5pm. Brief outages will occur during these times. Comment: Key wireless equipment will be upgraded on 11/25/2013 between 3:30am and 7:00am to improve stability. The wireless network will be unavailable or work sporadically during this period. Later in the week, the oldest wireless equipment will be replaced in HEP (1685 E 17th Ave) and in A&AA's building at 1479 Moss St. on Nov. 27 between 8am and 5pm. Brief outages will occur during this time. 
Comment: The oldest wireless equipment will be replaced in HEP (1685 E 17th Ave) and in A&AA's building at 1479 Moss St. on Nov. 27 between 8am and 5pm. Brief outages will occur during this time. Comment: At 10:05am, the wireless service on the west side of campus (west of University Street) experienced an outage approximately 40 minutes long. Comment: Wireless will be unavailable on Saturday, 11/16/2013 between 5:00am and 6:30am for important maintenance. Comment: The east side of campus (east of University Street) experienced a brief outage around 1:35pm today. Comment: On Thursday (14-Nov-2013) between 6:00am and 6:30am, WiFi will experience a brief outage. On Wednesday (13-Nov-2013) during business hours, there will be brief outages in Volcanology, Riley Hall, Hamilton Hall, Rainier Building, Esslinger Hall, and the Student Recreation Center as the oldest WiFi equipment in those buildings is replaced. Comment: Information Services will be replacing the oldest WiFi equipment in Volcanology, Riley Hall, Hamilton Hall, Rainier Building, Esslinger Hall, and the Student Recreation Center on Wednesday, Nov. 13. Between 8am and 5pm PST, users may experience a brief outage as equipment is replaced. Comment: Information Services will be replacing the oldest WiFi equipment in Volcanology, Riley Hall, Hamilton Hall, Rainier Building, Esslinger Hall, and the Student Recreation Center on Wednesday, Nov. 11. Between 8am and 5pm PST, users may experience a brief outage as equipment is replaced. 
Comment: The older WiFi access points in Hendricks Hall, Collier House, the Bowerman Building, Hayward Field, Gerlinger Hall, and the Alder Building (818 E 15th Ave) will be replaced on Monday, Nov. 11 between 8am and 5pm. This work will take place one device at a time to minimize impact. However, users connected to an access point will experience a brief interruption or, in the worst case, need to reconnect to WiFi. Comment: The older WiFi access points in the Bowerman Building, Hayward Field, Gerlinger Hall, and the Alder Building (818 E 15th Ave) will be replaced on Monday, Nov. 11 between 8am and 5pm. This work will take place one device at a time to minimize impact. However, users connected to an access point will experience a brief interruption or, in the worst case, need to reconnect to WiFi. Comment: The older WiFi access points in the Bowerman Building, Hayward Field, and the Alder Building (818 E 15th Ave) will be replaced on Monday, Nov. 11 between 8am and 5pm. This work will take place one device at a time to minimize impact. However, users connected to an access point will experience a brief interruption or, in the worst case, need to reconnect to WiFi. Comment: The older WiFi access points in the Bowerman Building and Hayward Field will be replaced on Monday, Nov. 11 between 8am and 5pm. This work will take place one device at a time to minimize impact. However, users connected to an access point will experience a brief interruption or, in the worst case, need to reconnect to WiFi. Comment: The older WiFi access points in the School of Law will be replaced on Friday, Nov. 8 between 8am and 5pm. This work will take place one device at a time to minimize impact. However, users connected to an access point will experience a brief interruption or, in the worst case, need to reconnect to WiFi. Comment: Between 5:30am and 6:30am on Thursday, Oct. 17, WiFi will be sporadic or unavailable as we work to test and optimize a set of features. 
Comment: A portion of the WiFi network equipment will be restarted on Thursday morning (10/3/13) between 5am and 7am. You may lose your WiFi connection for a few minutes during this activity. Comment: The beginning of fall term places high demand on campus WiFi. Please plug in to the network when possible. Comment: The beginning of fall term places high demand on campus WiFi. Please plug in to the network when possible. More information will be posted here as it becomes available. Comment: Heavy usage due to the start of fall term is causing WiFi problems in Lawrence, Knight Law, the first and second floors of Condon, and some residence halls. Comment: On Tuesday, 9/24/2013, between 6pm and midnight, nearly all of campus WiFi will experience severely degraded performance for a 15 minute period. This service degradation is due to work being performed on the WiFi network. Comment: On Tuesday, 9/24/2013, between 6pm and 9pm, WiFi on campus east of University Avenue will experience severely degraded performance for a 15 minute period. This service degradation is due to work being performed on the WiFi network. Comment: This service may be unavailable or intermittent.
2015-09-16: Assigned to KIERAN MURPHY LLC (assignment of assignors interest; assignor: MURPHY, KIERAN P.). 2017-05-12: Assigned to IZI MEDICAL PRODUCTS, LLC (confirmatory patent assignment; assignors: KIERAN MURPHY, LLC; KIERAN P.J. MURPHY, M.D., a/k/a KIERAN P. MURPHY, M.D.). An embodiment of the present invention provides a kit of parts for use in a surgical procedure performed under image guidance, and particularly under real time image guidance. The kit includes a sterilized drape for use with the chosen imaging machine and which can be used to provide a sterile operating environment when the procedure is performed under the imaging beam. The kit also includes a needle holder that can keep the surgeon's hand away from the imaging beam. The needle holder is operable to hold a needle that is made from a material suitable for piercing tissue, but also substantially preserves the appearance of the needle when it is viewed under the imaging beam. This application is a continuation-in-part of U.S. patent application Ser. No. 13/858,609 filed on Apr. 8, 2013 which is a continuation of U.S. patent application Ser. No. 13/190,830 filed on Jul. 26, 2011, now abandoned, which is a divisional of U.S. patent application Ser. No. 11/081,494 filed on Mar. 17, 2005, now abandoned, which is a divisional of U.S. patent application Ser. No. 10/373,835 filed on Feb. 27, 2003, now abandoned, which claims priority to U.S. Provisional Application No. 60/366,529 filed on Mar. 25, 2002 and U.S. Provisional Application No. 60/366,530 filed on Mar. 25, 2002, the disclosures of which are all incorporated by reference herein in their entirety. The present invention relates generally to image guided surgery and more particularly relates to a kit of parts, and the individual parts of the kit, for use in navigation during a surgical procedure. 
Over one million CT-guided biopsies are performed per year in the US. There are two million ultrasound-guided biopsies a year. Many of these ultrasound biopsies are performed because computerized tomography (“CT”) is not available. Ultrasound is also traditionally faster than CT, as there is the availability of substantially real time imaging. Traditionally, CT required the acquisition of an image, the passage of a needle, the acquisition of another image and the repositioning of the needle, to be checked by acquisition of another image. With this process a biopsy could take hours, and it was hard to keep track of the needle tip relative to the patient and to know whether it was necessary to angle up or down to get to the target. The recent availability of CT fluoroscopy has radically changed management of patients. With CT fluoroscopy, cross sectional images of the body are obtained which are refreshed up to thirteen times a second. Further increases in the refresh rate are believed by the inventor to be a reasonable expectation. With some CT scanners three slices can be presented simultaneously, all being refreshed thirteen times a second. This can create a substantially flicker-free image of a needle or device being passed into the patient. This has the potential to increase speed, accuracy and the ability to safely deliver needles to sensitive or delicate structures and avoid large blood vessels. However, there are drawbacks and limitations to CT fluoroscopy. These mainly relate to issues of infection due to the procedure and radiation safety for the physician. For example, during the passage of the needle by the physician's hands into the patient under substantially real time x-ray guidance, the physician's hand is in the x-ray beam. This can result in an accumulation of excessive radiation dose to the physician's hand.
The physician may perform the procedure repeatedly during his career or even during a single day, and this cumulative dose becomes an issue of personal radiation safety. Furthermore, current biopsy needles are composed of metal (e.g. stainless steel) that generates significant artifacts when used with x-ray detectors of CT quality. These artifacts are related to the density of the metals used in these needles and are called beam-hardening artifacts. These artifacts can obscure the intended target or an important structure, and possibly lead to inadvertent injury to the patient. Accordingly, current biopsy needles are not generally suitable for CT image guided surgical procedures. A further disadvantage of the prior art is that needles currently used for biopsies typically have the stylet attached to the trocar loosely, and such a loose attachment can present certain hazards when using such a needle under CT imaging. A further disadvantage of the prior art is that, since CT machines are typically used for simple capturing of images, they are typically non-sterile, and therefore, under CT image guidance procedures, elaborate sterilization can be necessary to reduce the risk of patient infection. Simplified sterilization techniques are therefore desirable. It is therefore an object of the invention to provide a kit of parts for image guided surgical procedures that obviates or mitigates at least one of the above-identified disadvantages of the prior art. In an aspect of the invention there is provided a sterile needle holder that allows the transmission of force from the physician's hand to the needle, so that the needle can be guided into the patient without requiring the physician to have his hand in the x-ray beam during the procedure. It is presently preferred that the needle holder be made from materials such that artifacts that would obscure the target are not generated (or are desirably reduced), i.e. that the holder be radiolucent.
It is therefore desirable to provide needles of decreased density. The unit of density used for CT is the Hounsfield unit, after the inventor of CT, Sir Godfrey Newbold Hounsfield. Hounsfield units quantify the radiopacity (i.e. radiodensity) of a material: the extent to which the material impedes the passage of radiation such as the x-rays used in CT. The radiopacity of a material under CT scanning, in Hounsfield units, is generally proportional to the physical density of the material (see Kim S, Lee GH, Lee S, Park SH, Pyo HB, Cho JS, “Body fat measurement in computed tomography image”, Biomed Sci Instrum. 1999; 35: pages 303-308). A Hounsfield value of zero is attributed to the density of water on CT; bone is higher in density than water, while fat is lower. Fat therefore has a negative Hounsfield number. According to an aspect of the invention there are provided needles that are composed of metals or composites that are visible on CT but have a reduced likelihood of showing artifacts under CT. Needles are composed of two parts, an outer trocar and an inner stylet. Either one or the other, or both, can be made of material of diminished Hounsfield unit density. It can thus be desirable to construct a stylet made of carbon fiber or plastic. Aluminum, Nitinol and Inconel are metals that are MRI compatible and may be valuable for CT purposes while at the same time being useful for MRI. In another aspect of the invention there is provided a biopsy needle wherein the stylet is attached to the trocar via a locking means or attachment means, such as a Luer Lock™ or a simple screw system. The locking biopsy needle is thus used under CT image guidance, advanced using the needle holder. The locking needle can thus be unlocked at the desired time, reducing the likelihood of trauma or injury to the patient during navigation under CT image guidance.
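The Hounsfield scale described above can be sketched numerically. By definition, HU = 1000 × (μ − μ_water) / (μ_water − μ_air), which pins water at 0 HU and air at −1000 HU. The attenuation coefficients below are approximate values at CT energies, used only for illustration; the function name is an assumption, not part of the specification.

```python
# Sketch of the Hounsfield scale: HU is defined from linear attenuation
# coefficients so that water sits at 0 HU and air at -1000 HU.
MU_WATER = 0.1928  # approximate linear attenuation of water at CT energies (1/cm)
MU_AIR = 0.0002    # approximate linear attenuation of air (1/cm)

def hounsfield(mu: float) -> float:
    """Convert a linear attenuation coefficient to Hounsfield units."""
    return 1000.0 * (mu - MU_WATER) / (MU_WATER - MU_AIR)

print(hounsfield(MU_WATER))        # water -> 0.0
print(round(hounsfield(MU_AIR)))   # air -> -1000
```

A denser material (higher μ than water) yields a positive HU, which is why the low-density plastics and carbon discussed here fall well below stainless steel on the scale.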
In another aspect of the invention there is provided a drape that reduces contamination of the operator's hand from contact with the side of the CT scanner. For conventional angiography, a sterilized plastic bag with an elasticated top is placed around the image intensifier and used like a sack. A CT machine, by contrast, has a donut-shaped configuration, and the patient passes through the central hole of the donut. Preferably, such a drape is disposable, but re-sterilizable drapes are also within the scope of the invention. It is presently preferred that the drape would be like a basketball hoop. In this particular implementation of this aspect of the invention, the basketball-hoop like drape is attachable to the open ends of the CT scanner by any suitable attachment means, such as either or a combination of: a) adhesive; b) preplaced hoops affixed to the CT scanner, whereby such hoops would attach by an elasticated band to the drape; c) the drape could be made from a metal that is foldable and therefore transportable, though when released from its package would have a radial force such that it would affix the drape to either side of the CT scanner. Such a material could be Nitinol, from Nitinol Devices and Components, 47533 Westinghouse Drive, Fremont, Calif. 94539. In another aspect of the invention there is provided a kit for use in CT guided image fluoroscopy, comprising: (1) a needle holder for keeping the operator's hand out of the beam; (2) a needle of diminished beam hardening artifact inducing potential; (3) a lock to fix the stylet with respect to the trocar in an appropriate position; and (4) a drape to protect the operator's hand from contamination.
In another aspect of the invention there is provided a kit of parts for use in an image guided surgical procedure using a substantially real time imaging machine comprising: a needle holder having a grasping means and a handle depending therefrom, the handle being configured such that the grasping means can be exposed to the imaging beam and an operator's hand can be distal from the imaging beam in relation to the grasping means; a needle attachable to the grasping means and having a rigidity to travel through mammalian tissue to a target area and having a radiopacity that substantially preserves an appearance of the needle when the needle is viewed on a display of the real time imaging machine; and a sleeve for attachment to the real time imaging machine that provides a substantially sterile operating environment for using the needle when attached to the machine. In a particular implementation of the foregoing aspect, a locking mechanism is associated with at least one of the grasping means and the needle for releasably locking the needle to the needle holder. In another aspect of the invention there is provided a surgical instrument for use in an image guided surgical procedure using a substantially real time imaging machine comprising: a needle holder having a grasping means and a handle depending therefrom, the handle being configured such that the grasping means can be exposed to the imaging beam and an operator's hand can be kept a distance away from the imaging beam; and a needle attachable to the grasping means and having a rigidity to travel through mammalian tissue to a target area and having a radiopacity that substantially preserves an appearance of the needle when the needle is viewed on a display of the real time imaging machine. In a particular implementation of the foregoing aspect, the needle is a trocar comprising a cannula and a stylet receivable within the cannula. 
In another aspect of the invention there is provided a sterile drape for attachment to a real time imaging machine comprising: a sheet of material for providing a substantially sterile barrier between the imaging machine and a patient; and an attachment means for affixing the sheet to the imaging machine. In a particular implementation of the foregoing aspect, the sheet of material is plastic and substantially tubular. In a particular implementation of the foregoing aspect, the imaging machine has a pair of annular lips that flare outwardly from a respective opening of the machine, and the attachment means comprises an annular shaped elastic integral with each respective open end of the sheet, each of the elastics for grasping a respective lip. In a particular implementation of the foregoing aspect, the drape is umbrella-like, in that the material is plastic and the attachment means is a series of rods integrally affixed to the plastic, the rods made from a springed material such that the sleeve has a first position wherein the sleeve is collapsed and a second position wherein the sleeve is outwardly springed. In a particular implementation of the foregoing aspect, the material is nitinol and the attachment means is achieved through configuring the nitinol to be outwardly springed. In a particular implementation of the foregoing aspect, the attachment means is selected from the group consisting of velcro, ties, and snaps. In another aspect of the invention there is provided an imaging machine comprising a channel for receiving a patient and exposing the patient to a substantially real time imaging beam. The machine also includes an attachment means for affixing a sterile drape to the channel, such that when the sterile drape is attached thereto a substantially sterile barrier between the channel and the patient is provided, thereby providing a substantially sterile environment for the patient.
In a particular implementation of the foregoing aspect, the attachment means is comprised of a pair of annular lips flanged so as to provide a secure attachment to a pair of annular elasticized openings of a sterile sleeve. In a particular implementation of the foregoing aspect, the beam is selected from the group consisting of CT, MRI, and X-Ray. In a particular implementation of the foregoing aspect, a refresh rate of the real-time imaging beam is greater than, or equal to, about thirteen frames per second. The rate can be greater than about thirty frames per second. The rate can also be greater than about fifty frames per second. In other implementations, however, it is contemplated that the refresh rate can be as low as about one frame per second, depending on the actual procedure being performed and/or the imaging device being used. FIG. 11B is a partial side elevational view of a stylet of the needle of FIG. 6, according to an embodiment. Referring now to FIGS. 1 and 2, a computerized tomography (“CT”) imaging machine in accordance with an embodiment of the invention is indicated generally at 30. CT Machine 30 is composed of a chassis 34 and a channel 38 through which a patient is received in order to capture the desired images of the patient and/or perform any desired procedures. A presently preferred CT machine for use in the present embodiment is an imaging machine capable of generating substantially real time images. In order to generate images in substantially real time, the imaging machine can generate images at a rate of about fifty frames per second or greater. However, substantially real time images suitable for the present embodiment can also be generated by machines capable of generating images at a rate of about thirty frames per second or greater. However, substantially real time images suitable for the present embodiment can also be generated by a machine capable of generating images at a rate of about thirteen frames per second or greater. 
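The refresh-rate tiers discussed above (about thirteen, thirty, and fifty frames per second) can be expressed as a simple classification. The thresholds come from the text; the function name and labels are illustrative assumptions:

```python
def realtime_tier(fps: float) -> str:
    """Classify an imaging machine's refresh rate against the tiers
    discussed above: >= ~50, >= ~30, and >= ~13 frames per second."""
    if fps >= 50:
        return "preferred (>= 50 fps)"
    if fps >= 30:
        return "suitable (>= 30 fps)"
    if fps >= 13:
        return "substantially real time (>= 13 fps)"
    return "below the substantially flicker-free range"

# The CT fluoroscopy rate cited above falls in the lowest real-time tier:
print(realtime_tier(13))
```

As the text notes, even rates as low as about one frame per second can suffice for some procedures, so the lowest tier is a guideline rather than a hard cutoff.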
A presently preferred substantially real time imaging machine is the Toshiba Aquilion, a CT machine, which generates images at a rate of about thirteen frames per second for use in performing procedures under CT image guidance. As will be understood by those of skill in the art, chassis 34 in FIGS. 1 and 2 is a simplified representation used for purposes of explaining the present embodiment, and thus also contains the requisite imaging beam technology to provide the desired CT imaging functionality. Machine 30 is further characterized by a pair of annular lips 42 a and 42 b (or other attachment means) that flare outwardly from a respective opening of channel 38 and away from chassis 34. Each lip 42 attaches to chassis 34 at the periphery of channel 38, where channel 38 meets chassis 34 at the ends of machine 30. Further details on machine 30 and lips 42 and their use will be discussed in greater detail below. Referring now to FIG. 3, a sterile sleeve is indicated generally at 46, in accordance with another embodiment of the invention. In the present embodiment, sleeve 46 is comprised of a substantially tubular sterilized plastic sheet (or other suitably flexible material that will not interfere with the imaging beam of machine 30). While not shown in the Figures, sleeve 46 is typically pre-sterilized and then folded for storage (all while maintaining sterility) within a sterile packaging. The sterile packaging is thus not opened until sleeve 46 is put into use, and only then opened under acceptable and/or desirable sterile conditions. Sleeve 46 is further characterized by a pair of annular openings 50 a and 50 b interconnected by a continuous plastic sheet 54. Each opening 50 a and 50 b is further characterized by an elastic 58 a and 58 b encased within the periphery of its respective opening. Referring again to FIGS. 1 and 2, in conjunction with FIG.
3, the length of sheet 54 between each opening 50 a and 50 b is substantially the same as the length between each lip 42 a and 42 b. Further, the diameter of sheet 54 typically will substantially match the variation in the diameter of channel 38 along its length, the diameter of sheet 54 being slightly smaller than the diameter of channel 38 therealong. Referring now to FIGS. 4 and 5, sleeve 46 is shown assembled to machine 30. In order to perform such assembly, the packaging containing sleeve 46 is opened, in sterile conditions, and sleeve 46 is unfolded, just prior to the use of machine 30 for capturing images and/or for performing a procedure under image guidance. Accordingly, to assemble sleeve 46 with machine 30, elastic 58 a of opening 50 a is first stretched and passed over lip 42 a, thereby securing opening 50 a to lip 42 a, and widening opening 50 a so that it is substantially the same size as the opening of channel 38. Next, the remainder of sleeve 46 including sheet 54 and opening 50 b are passed through channel 38 towards and through the opening of channel 38 opposite from lip 42 a. Elastic 58 b is then stretched so that opening 50 b extends over lip 42 b, thereby securing opening 50 b to lip 42 b, thereby completing the assembly of sleeve 46 to machine 30, as seen in FIGS. 4 and 5. Accordingly, CT machine 30 can now be used in a sterile manner. When the use of CT machine 30 is completed, sleeve 46 can simply be disassembled therefrom by substantially reversing the above-described assembly steps, and then disposed of, or re-sterilized, as desired and/or appropriate to provide patient safety. It will now be understood that sleeve 46 and machine 30 are complementary to each other, and thus, the various components and dimensions of sleeve 46 are chosen to correspond with the complementary parts on machine 30. 
Thus, for example, elastics 58 are chosen to have a material and elasticity such that assembly of an opening 50 to a corresponding lip 42 can be performed with relative ease. In other words, the elasticity is chosen so that the person performing the assembly will not have to apply undue force to expand elastic 58 and fit it around lip 42. By the same token, the elasticity of elastic 58 is sufficiently strong to ensure a reliable attachment of opening 50 to the corresponding lip 42 during the capturing of images or the performance of a surgical procedure under image guidance. Furthermore, the diameter of sheet 54 is chosen so as to not substantially reduce the diameter of channel 38 after assembly. The material of sheet 54 is also chosen so as to not interfere with the imaging beam generated by machine 30. It should also now be understood that sheet 54 can be constructed in different shapes to complement different types and shapes of imaging machines that are capable of providing substantially real time images and thereby could benefit from the sterile sleeve of the present invention. In particular, sheet 54 may have only one opening 50, depending on the type of imaging machine with which it is used. By the same token, it will be understood that any variety of mechanical substitutes for the cooperating lips 42 and elastics 58 can be provided, and that such substitutes are within the scope of the invention. Thus, in general, any cooperating attachment means between sleeve 46 and machine 30 can be provided, and such varied cooperating attachment means are within the scope of the invention. For example, hooks and loops, velcro, ties, and/or snap-buttons or the like can be used as cooperating attachment means. By the same token, it is to be understood that lip 42 (or any suitable mechanical equivalent) can be retrofitted onto existing CT machines, or built directly thereto, as desired.
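The dimensional constraints described above (sheet length substantially matching the lip-to-lip span, sheet diameter slightly smaller than the channel diameter) can be sketched as a validation routine. All names, units and tolerance values here are illustrative assumptions, not taken from the specification:

```python
def sleeve_fits(sheet_len_mm, sheet_dia_mm, lip_span_mm, channel_dia_mm,
                len_tol_mm=10.0, dia_margin_mm=5.0):
    """Check the fit rules described above: sleeve length substantially
    equal to the lip-to-lip span, and sleeve diameter slightly smaller
    than the channel diameter (but not so small that it narrows the bore)."""
    length_ok = abs(sheet_len_mm - lip_span_mm) <= len_tol_mm
    diameter_ok = 0 < (channel_dia_mm - sheet_dia_mm) <= dia_margin_mm
    return length_ok and diameter_ok

# A 700 mm bore with a sleeve 3 mm narrower and a matching length:
print(sleeve_fits(sheet_len_mm=400, sheet_dia_mm=697,
                  lip_span_mm=400, channel_dia_mm=700))  # True
```

The upper bound on the diameter gap reflects the text's requirement that the sleeve not substantially reduce the usable diameter of channel 38.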
Furthermore, the location of the cooperating attachment means on machine 30 and sleeve 46 need not necessarily be limited to the respective distal ends of machine 30 and sleeve 46, but need only result in the ability to assemble sleeve 46 to machine 30 while leaving a suitable and appropriately substantially sterile passageway within channel 38 for receiving a patient. In another variation of the foregoing, sleeve 46 could be made from a rigid material, or an outwardly springed material, to thereby obviate the need for lip 42 or any means of attachment actually connected to machine 30. Referring now to FIGS. 6-8, a needle system for use under substantially real time image guidance is indicated generally at 100 and is in accordance with another embodiment of the invention. Needle apparatus 100 comprises a needle holder 104 and a trocar 108, which itself is comprised of a stylet 112 and a cannula 116. Needle holder 104 is typically made of a plastic or other material that does not appear under CT image guidance (or under the imaging beam of the particular imaging machine being used). Holder 104 is comprised of a handle portion 120 and a grasping portion 124. In a present embodiment, handle portion 120 depends from grasping portion 124 at an angle "A" greater than about ninety degrees; however, handle portion 120 can actually depend from grasping portion 124 at ninety degrees or any other desired angle, depending on the procedure being performed and the preferences of the surgeon or other medical professional performing the procedure. In a present embodiment, handle portion 120 is substantially cylindrical, but can be any desired shape and length, again depending on the preferences and/or needs of the procedure and/or surgeon.
Grasping portion 124 is also substantially cylindrical, but is further characterized by a hollow channel 130 through which cannula 116 can be passed, and it is presently preferred that hollow channel 130 be of a slightly larger diameter than cannula 116 so as to securely hold cannula 116 within grasping portion 124. In a present embodiment, grasping portion 124 includes a set of interior threads 128 located on the portion of grasping portion 124 nearest handle portion 120. Cannula 116 is comprised of a hollow shaft 132 with a tip 136. Tip 136 has a desired shape for piercing the target area of the patient in a desired manner. It is presently preferred that shaft 132 be made from a material that is hard enough to pierce the patient's target area, yet also presents reduced and/or minimal artifacts when shaft 132 is viewed under a CT imaging beam using a CT machine (such as machine 30 shown in FIG. 1), such that the appearance of shaft 132 is substantially preserved when viewed under such an imaging beam. Cannula 116 is also characterized by a set of exterior threads 138 towards the proximal end 140 of cannula 116, opposite from tip 136. Exterior threads 138 are thus complementary to interior threads 128 of grasping portion 124, such that trocar 108 can be releasably secured to grasping portion 124. Cannula 116 is also characterized by a set of interior threads 144 at the proximal end 140 of cannula 116, proximal end 140 also being made from a material that presents reduced and/or minimal artifacts when viewed under a CT imaging beam, such that the appearance of proximal end 140 is substantially preserved when viewed under such an imaging beam. Stylet 112 is comprised of a needle having a solid shaft 148 including a point 152 at its distal end.
Point 152 is complementary to tip 136, and the length of shaft 148 is substantially the same as the length of shaft 132, such that when stylet 112 is inserted within and assembled to cannula 116, point 152 and tip 136 form a contiguous shape. Solid shaft 148 is preferably made from substantially the same material as shaft 132, such that shaft 148 is hard and/or rigid enough to pierce a target area T within the patient, yet also made from a material that presents reduced, minimal and/or no artifacts when shaft 148 is viewed under a CT imaging beam using a CT machine (such as machine 30 shown in FIG. 1), such that the appearance of stylet 112 is substantially preserved when viewed under such an imaging beam. Suitable materials can include, for example, certain carbon fibres, inconel, etc. Other materials will now occur to those of skill in the art. Stylet 112 is also characterized by a set of exterior threads 156 at the proximal end 160 of stylet 112, opposite from point 152. Proximal end 160 is also made from a material that presents reduced and/or minimal artifacts when viewed under a CT imaging beam, again such that the appearance of proximal end 160 is substantially preserved when viewed under such an imaging beam. Exterior threads 156 are thus complementary to interior threads 144, such that stylet 112 can be releasably secured to cannula 116. As discussed above, the materials from which one or both of cannula 116 and stylet 112 are manufactured have lower radiopacity (that is, lower Hounsfield values) as compared to conventional needle components (which are generally stainless steel), in order to reduce the incidence of artifacts under CT imaging. The material from which one or both of cannula 116 and stylet 112 are manufactured is selected from the group including aluminum, carbon fiber, plastic, nitinol and inconel.
Known examples of plastics include nylon, poly(methyl methacrylate) (PMMA, also known as Lucite or acrylic), polyether ether ketone (PEEK), polycarbonate and polyethylene. As seen in Table 1, materials such as plastics, carbon and aluminum, from which one or both of cannula 116 and stylet 112 can be made, have lower Hounsfield values than other metals such as iron, and also present reduced artifacts under CT imaging. Aluminum, PMMA and carbon have lower physical densities than iron and stainless steel. Use of apparatus 100 is represented in FIG. 1 and FIGS. 7-9. In use, assembled needle apparatus 100 as shown in FIG. 7 is grasped by a surgeon by handle portion 120, towards or at the end of handle portion 120 opposite from grasping portion 124. Thusly grasped, trocar 108 and grasping portion 124 are then placed within the imaging beam (e.g. the beam within channel 38 of machine 30 in FIG. 1) when the machine is "on", the surgeon being careful to keep his or her hand out of the imaging beam. Trocar 108 is thus viewed on the display of machine 30, and guided to the target area of the patient also located within channel 38. Trocar 108 can thus be used in any desired procedure under such image guidance while keeping the surgeon's hand out of harm's way. For example, as seen in FIG. 8, trocar 108 is shown piercing through brain tissue towards a target area T inside the patient. As seen in FIG. 9, stylet 112 is removed from cannula 116 by first disengaging threads 156 from threads 144, thereby leaving a hollow channel between the exterior of the patient and the target area T. This hollow channel can then be used in any desired manner, such as to drain excess cerebral spinal fluid, to treat a clot and/or to insert a catheter according to the shunt implantation method taught in the copending U.S. patent application entitled "Method, Device and System for Implanting a Shunt" filed on Feb. 11, 2003.
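Given the proportionality between Hounsfield value and physical density cited earlier (Kim et al.), the candidate needle materials can be compared by density alone. The density figures below are approximate textbook values, not taken from Table 1:

```python
# Approximate physical densities in g/cm^3 (illustrative values,
# not reproduced from Table 1 of the specification).
DENSITIES = {
    "water": 1.00,
    "PMMA": 1.18,
    "carbon fiber": 1.75,
    "aluminum": 2.70,
    "stainless steel": 7.90,
}

# Rank candidate materials: lower density implies a lower Hounsfield
# value and fewer beam-hardening artifacts under CT.
for name, rho in sorted(DENSITIES.items(), key=lambda kv: kv[1]):
    print(f"{name:16s} {rho:5.2f} g/cm^3  ({rho / DENSITIES['water']:.1f}x water)")
```

The ordering matches the text: PMMA and carbon sit closest to water, aluminum is intermediate, and stainless steel is several times denser, which is why it generates the worst artifacts.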
It is to be understood that various combinations, subsets and equivalents can be employed in the foregoing description of apparatus 100. For example, any one or more of the pairs of threads 156 and 144, or 138 and 128, can be reversed and/or substituted for a Luer-Lock™ system. Furthermore, any one of the pairs of threads 156 and 144, or 138 and 128, could be replaced by a clamping mechanism. For example, grasping portion 124 could be replaced with a mechanical clamp that surrounds proximal end 140 of cannula 116. Referring now to FIG. 10, a kit for performing image guided surgical procedures is indicated generally at 200. Kit 200 comprises a sterile package 204 which includes two sterile compartments 208 and 212. Compartment 208 houses sleeve 46 and compartment 212 houses apparatus 100. Kit 200 can then be distributed to hospitals and clinics. Prior to performing a surgical procedure, compartment 208 can be opened and sleeve 46 applied to the corresponding CT machine. When the patient is prepped, compartment 212 can be opened and the apparatus 100 therein used as previously described. Kit 200 can include such other components as desired to perform a particular procedure under substantially real time image guidance. Variations of the structures of one or both of stylet 112 and cannula 116 (which together comprise trocar 108) are contemplated. In addition to the reduced radiopacity (compared to conventional stainless steel devices) described above, cannula 116 can include protrusions on the inner surface thereof (that is, inside the bore of cannula 116). The nature of the protrusions is not particularly limited. For example, referring to FIG. 11A, a variation of cannula 116 is illustrated in which a distal portion thereof bears at least one inner surface feature 1100. Inner surface features 1100 can have a variety of structures, and in general inner surface features 1100 provide at least one discontinuity on the inner surface of cannula 116.
Examples of inner surface features 1100 include ridges projecting from the inner surface of cannula 116 into the lumen or bore of cannula 116, and grooves cut into the inner surface of cannula 116 (in other words, projecting into the wall of cannula 116, away from the lumen). Ridges and grooves may also be combined in a single cannula 116. Both the ridges and the grooves can have a variety of spacings and angles. For example, FIG. 11A shows inner surface features 1100 as grooves cut into the inner wall of cannula 116, each groove disposed on a plane substantially parallel to a longitudinal axis 1104 of cannula 116. In other examples, the grooves can be provided in the form of rifling (e.g. one or more helical grooves cut into the inner surface; these may also be referred to as threads). Ridges can also be implemented at any suitable angle or spacing, and can also be discrete annular features (as in FIG. 11A) or helical ridges. In some embodiments, the entire length of cannula 116 can include inner surface features 1100; however, in the presently preferred embodiment, inner surface features 1100 are included only in a portion of cannula 116 adjacent to the distal end, for example along the distal ten percent of the length of cannula 116. Other types of inner surface features are contemplated, including a sandblasted or acid-etched surface, and the like. As mentioned above, various types of inner surface features 1100 may be combined in a single cannula 116. Inner surface features 1100 may increase the effectiveness of cannula 116 in retaining patient tissue for a biopsy, as the friction between the tissue and the inner wall of cannula 116 bearing inner surface features 1100 may be greater than the friction between the tissue and a cannula lacking inner surface features 1100. In addition, in some embodiments inner surface features 1100 may act to further reduce the radiopacity of cannula 116.
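The placement rule above (features only along the distal ten percent of the cannula's length) can be sketched as a small geometry helper. The function name and the example length are illustrative assumptions:

```python
def distal_feature_span(cannula_len_mm: float, fraction: float = 0.10):
    """Return the (start, end) positions, measured from the proximal end,
    of the distal portion of the cannula that carries inner surface
    features, per the ten-percent example described above."""
    start = cannula_len_mm * (1.0 - fraction)
    return start, cannula_len_mm

# For a 150 mm cannula, features occupy roughly the last 15 mm:
print(distal_feature_span(150.0))
```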
For example, when inner surface features 1100 comprise grooves cut into the inner surface of cannula 116, the thickness of the wall of cannula 116 is reduced in the locations of the grooves, which may reduce radiopacity. Referring now to FIG. 11B, a variation of stylet 112 is depicted. Stylet 112 generally includes a substantially solid shaft of a first material (which, as described earlier, may be a material having a lower radiodensity than stainless steel). In the present embodiment, stylet 112 additionally includes at least one particle 1104 of a second material defining an interface between the above-mentioned first material and the second material. The size and shape of particles 1104 are not particularly limited. In the present example, particles 1104 are bubbles in the shaft of stylet 112, and thus the second material may be a fluid such as air, nitrogen gas, and the like. In other embodiments, particles 1104 may be solid bodies of any suitable materials that are different from the first material of which the shaft of stylet 112 is composed. For example, when stylet 112 is plastic, particles 1104 may be provided by bodies of fluid, metal, a different plastic, and the like, embedded in stylet 112. Particles 1104 may increase the visibility of stylet 112 under certain imaging modalities, particularly acoustic modalities such as ultrasound. This may be particularly advantageous when cannula 116 is composed of a material such as plastic or carbon fiber, which may have lower ultrasound visibility than various metals. While only specific combinations of the various features and components of the present invention have been discussed herein, it will be apparent to those of skill in the art that desired subsets of the disclosed features and components and/or alternative combinations of these features and components can be utilized, as desired.
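The visibility gain from particles 1104 under ultrasound can be illustrated with the standard acoustic reflection formula: the intensity reflection coefficient at an interface is R = ((Z2 − Z1) / (Z2 + Z1))^2, so a gas bubble embedded in plastic reflects nearly all incident ultrasound. The impedance values below are approximate textbook figures, not taken from the specification:

```python
def reflection_coeff(z1: float, z2: float) -> float:
    """Intensity reflection coefficient at an interface between media
    with acoustic impedances z1 and z2 (in MRayl)."""
    return ((z2 - z1) / (z2 + z1)) ** 2

Z_PLASTIC = 3.2   # approximate impedance of a rigid plastic, MRayl
Z_AIR = 0.0004    # approximate impedance of air, MRayl
Z_TISSUE = 1.6    # approximate impedance of soft tissue, MRayl

# An air bubble in a plastic stylet reflects almost everything...
print(f"plastic/air:    {reflection_coeff(Z_PLASTIC, Z_AIR):.3f}")
# ...while a plain plastic/tissue interface reflects far less.
print(f"plastic/tissue: {reflection_coeff(Z_PLASTIC, Z_TISSUE):.3f}")
```

This large impedance mismatch at each bubble is the mechanism by which the particles brighten the stylet's echo relative to a uniform plastic shaft.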
For example, while the embodiments discussed herein refer to CT machines, it is to be understood that the teachings herein can be applied to any type of imaging machine capable of generating substantially real-time images, such as machines based on computerized tomography (“CT”), magnetic resonance (“MR”), or X-ray. The above-described embodiments of the invention are intended to be examples of the present invention, and alterations and modifications may be effected thereto, by those of skill in the art, without departing from the scope of the invention, which is defined solely by the claims appended hereto.

wherein the solid shaft further comprises at least one particle of a third material embedded in the second material for defining an interface between the second material and the third material.

2. The needle of claim 1, wherein the inner surface comprises a plurality of inner surface features adjacent to the open distal end.

3. The needle of claim 2, wherein the inner surface features comprise one of: grooves in the inner surface and ridges extending into the lumen from the inner surface.

4. The needle of claim 2, wherein the inner surface features are helical.

5. The needle of claim 2, wherein the inner surface features are substantially perpendicular to a longitudinal axis of the cannula.

6. The needle of claim 1, wherein the at least one inner surface feature is present along a fraction of a length of the cannula adjacent the open distal end.

7. The needle of claim 1, wherein the first material has a lower radiopacity than the radiopacity of stainless steel, such that an appearance of the cannula generated by a CT imaging apparatus presents reduced imaging artifacts.

8. The needle of claim 1, wherein the first material has a Hounsfield value of less than 1900.

9. The needle of claim 8, wherein the first material comprises one of plastic and carbon fiber.

10. The needle of claim 1, wherein the second material has a Hounsfield value of less than 1900.

11.
The needle of claim 1, wherein the second material comprises one of plastic and carbon fiber.

12. The needle of claim 1, wherein the third material has a density lower than the density of the second material, such that the interface increases the visibility of the stylet under an acoustic imaging modality.

13. The needle of claim 1, wherein said at least one particle comprises bubbles.

14. The needle of claim 13, wherein the third material is a fluid selected from the group consisting of air and nitrogen gas.
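Claims 8 and 10 above bound material choice by Hounsfield value. The Hounsfield unit (HU) is a standard linear rescaling of the X-ray linear attenuation coefficient in which water maps to 0 HU and air to about -1000 HU. A minimal sketch of the conversion follows; the attenuation value used for water (about 0.19 per cm at typical CT beam energies) is an illustrative assumption, not a figure from the patent:

```python
def hounsfield(mu: float, mu_water: float, mu_air: float = 0.0) -> float:
    """Convert a linear attenuation coefficient to Hounsfield units:
    HU = 1000 * (mu - mu_water) / (mu_water - mu_air).
    By definition water maps to 0 HU and air to -1000 HU."""
    return 1000.0 * (mu - mu_water) / (mu_water - mu_air)

# Illustrative: water at roughly 0.19 1/cm (assumed typical CT energy).
mu_water = 0.19
hu_water = hounsfield(mu_water, mu_water)  # water -> 0 HU
hu_air = hounsfield(0.0, mu_water)         # air   -> -1000 HU
```

On this scale, the claims' threshold of 1900 HU for the cannula and stylet materials sits well below dense metals such as stainless steel, which is consistent with the specification's goal of reducing CT imaging artifacts.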
https://patents.google.com/patent/US9375203B2/en
An acid is a substance having a pH of less than 7 when dissolved in water. Ecologists apply the term acidic to plant communities growing in rather acidic and nutrient-poor soils. Acidic soils result from the presence of various inorganic and organic acids. Related to alluvium or floodplains. Sediment deposited by flowing streams or rivers in floodplains and streambeds/riverbeds. Different sizes of sediment particles are deposited, depending on the water velocity related to the location within the floodplain. A former floodplain terrace, or alluvial landform, that is no longer subject to flood events, but is instead in the uplands. At Rock Creek Park, some of the highest hills are topped with coarse sand and smooth stones that appear to have been rounded and polished by turbulent water in past ages. Evidence of marine sediments is lacking, so these terraces are believed to have been formed by flowing fresh water, possibly an ancestral Potomac River. A base is a substance having a pH above 7 when dissolved in water. A basic substance is capable of neutralizing acids. Base elements, such as calcium, magnesium, and sodium, are one class of such substances, and are important nutrients for plants. Basic rocks are composed of minerals rich in base elements. Ecologists apply the term basic to plant communities whose presence indicates that the soil contains base elements available in a form plants can use, even if the overall soil pH is not basic. See base, basic, and element. A rock mineral containing base elements. Solid rock that underlies unconsolidated material such as soil and fragmented rock, or is occasionally exposed as rock outcrops. A rock-forming mineral high in iron, magnesium, and potassium, dark brown or black in color. A common kind of ‘mica’ (a group of soft minerals that tend to occur in flakes or sheets). Quoted from Stewart, K.G. and M. Roberson. 2007.
Exploring the Geology of the Carolinas. UNC Press, Chapel Hill, NC. Having relatively broad leaves, as opposed to needle-like leaves. Examples of broadleaf evergreen trees or shrubs: American holly, mountain laurel. Examples of broadleaf deciduous trees or shrubs: white oak, pink azalea. A narrow, continuous ring of growth tissues located just inside the external bark of a tree that is responsible for transporting water, nutrients, and food. As a tree ages, the cambium produces wood (to the inside) and bark (to the outside), causing the tree to grow in width. The trees whose crowns intercept most of the sunlight in a forest stand. The uppermost layer of a forest. Adapted from Johnson, P.S., S.R. Shifley, and R. Rogers. 2002. The ecology and silviculture of oaks. Chapter 5. Page 194. CABI Publishing, New York. A temporary opening in the forest canopy, caused by the death or toppling over of one or more canopy trees. To direct the course of a stream through a man-made channel. A citizen scientist is a member of the general public who engages in scientific work, often in collaboration with or under the direction of professional scientists and scientific institutions; an amateur scientist. Quoted from the Oxford English Dictionary. The unique identifier for each natural community. The classification code is used to identify each natural community in the U.S. National Vegetation Classification and other classification systems. The code is written as "CEGL" followed by a series of numbers. CEGL stands for Community Element Global. For example, the classification code for the Mid-Atlantic Mesic Mixed Hardwood Forest is CEGL006075. Pulverized rock fragments; a silky-textured, extremely fine size of rock particle. In U.S. geology, clay is a size label for rock fragments below 1/256 mm. See also cobbles, gravel, sand, and silt. Textural term referring to soil or sediment composed of 28-40 percent clay, and roughly equal parts of sand and silt.
A system of tree harvesting that removes all the trees in a given area, as opposed to other systems that leave some trees standing. Adapted from Draper, D.L. 2002. Our Environment: A Canadian Perspective, Second edition. Glossary. Nelson, Scarborough, Ontario, Canada. Describes rock with planes of weakness along which it may split. Describes colonies of plants that appear to be distinct individuals, but are genetically identical, and interconnected underground by specialized roots. Some types of shrubs in the heath family at Rock Creek Park are clonal. Above ground these plants appear to be distinct individuals, but underground they remain interconnected and are all part of the same plant. The easternmost stretch of land in the Mid-Atlantic region that lies between the Piedmont and the Atlantic Ocean. This relatively flat land is composed of layers of unconsolidated sediments and sedimentary rock that get thicker from west to east. The western boundary of the Coastal Plain is the Fall Zone, where it overlaps the Piedmont bedrock in the vicinity of Washington, D.C. Approximately fist-sized stones. In U.S. geology, cobble is a size label for rock fragments between 64 and 256 mm (about 2.5 to 10 inches). See also gravel, sand, silt, and clay. A blanket of loose stones and soil that moves downslope by a combination of natural processes (frost action, gravity and hillside creep, and slope wash), locally accumulating to considerable thickness on flatter areas (‘benches’ and toeslopes) in the form of ‘colluvial fans.’ At least a few inches of colluvium are present on many slopes in Rock Creek Park. The simultaneous demand by two or more organisms for limited environmental resources, such as nutrients, living space, or light. Quoted from The American Heritage Science Dictionary. 2005. Houghton Mifflin Company, Boston.
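The glossary entries above give two explicit grain-size boundaries (clay below 1/256 mm; cobble between 64 and 256 mm). These come from the standard Wentworth-style scale used in U.S. geology. A minimal classifier sketch follows; the intermediate silt/sand/gravel cutoffs (1/16 mm and 2 mm) and the "boulder" label are filled in from that standard scale as assumptions, since this glossary does not list them:

```python
def grain_size_class(diameter_mm: float) -> str:
    """Classify a rock fragment by diameter, using the boundaries given
    in the glossary (clay < 1/256 mm; cobble 64-256 mm). The silt/sand/
    gravel cutoffs and 'boulder' are standard values, not from the text."""
    if diameter_mm < 1 / 256:
        return "clay"
    if diameter_mm < 1 / 16:   # assumed standard silt/sand boundary
        return "silt"
    if diameter_mm < 2:        # assumed standard sand/gravel boundary
        return "sand"
    if diameter_mm < 64:
        return "gravel"
    if diameter_mm < 256:
        return "cobble"
    return "boulder"           # assumed label above the cobble range
```

On this scale a fist-sized, 100 mm stone falls in the cobble class, matching the glossary's description.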
The advantage that a characteristic (or set of characteristics) gives one organism over another in an environment where vital resources such as sunlight, water, nutrients, and space are limited and cannot be shared. For instance, among sun-loving plant species, the ability to grow faster and taller than surrounding plants may be a competitive advantage, giving a plant access to the greatest share of sunlight (and as a consequence, shading the other sun-loving plants, interrupting their growth). Curved inward, like the inside surface of a bowl. Describes the shape of land on some slopes, and especially near the base of a hill (toeslope), that tends to collect moisture and fine sediment runoff. A needleleaf tree that bears cones. A pine tree, for example. Curving or bulging outward, like the outside surface of a ball. Describes the shape of land (on some hillsides, for instance) that tends to shed moisture. The steep, concave bluffs found along meandering streams on the outside of a stream bend. They are formed by the erosion of soil as the stream collides with the bank. See also point bars. Referring to a plant that sheds leaves at the end of a growing season and regrows them at the beginning of the next growing season. Most deciduous plants bear flowers and have woody stems and, in this region, have broad rather than needlelike leaves. Maples, oaks, and elms are examples of deciduous trees. Compare evergreen. Adapted from The American Heritage Science Dictionary. 2005. Houghton Mifflin Company, Boston. Living organisms such as bacteria, fungi, ants, and worms, which are able to break down organic matter that is difficult for other organisms to digest. They typically secrete enzymes onto organic matter (dead organisms or animal or plant wastes) and then absorb the breakdown products. Decomposers fulfill a vital role in ecosystems, returning the constituents of organic matter to the environment as inorganic nutrients that can be used again by plants.
See also nutrient cycles . Adapted from Dictionary of Biology. 2004. Fifth edition. Oxford University Press. The breakdown of dead organisms or animal or plant wastes ( organic matter ) into inorganic nutrients by the action of decomposers , so that the nutrients can be used again by plants. See also nutrient cycles . Adapted from Dictionary of Biology. 2004. Fifth edition. Oxford University Press. Debris or litter of biological origin, such as leaf litter, animal waste, or dead organisms. Adapted from Allaby, M. 1994. The Concise Oxford Dictionary of Ecology. Oxford University Press. The amount of time that lapses between repeat episodes of a natural disturbance to a plant community. Where, for instance, long intervals between flood events are the norm, the vegetation will look quite different than where flooding occurs more frequently. (As used here) A redirected section of a stream, either temporary or permanent (usually for construction purposes). Oak species that are known for their ability to thrive on nutrient-poor, dry sites. Examples in the Mid-Atlantic region of the U.S. include: chestnut oak, scarlet oak, black oak, and post oak, and to some extent, white oak. Decaying leaves and branches covering a forest floor. Quoted from The American Heritage Dictionary of the English Language. 2006. Fourth edition. Houghton Mifflin Company, Boston. A boundary where two or more habitats meet. (For example, where a natural community meets another natural, semi-natural, or non-natural plant community or disturbed area.) Some animal species prefer edge habitats—where a meadow borders a forest, for instance—with access to resources that aren’t found in a single natural community. EDRR stands for Early Detection and Rapid Response—a strategy of watchfulness and quick eradication to keep newly-arrived species of weedy non-native invasive plants from getting established on a site. 
" EDRR species " is a nickname for species that are being targeted for management by this strategy. A substance whose structure is made up of only a single type of atom. For example, the mineral copper, which is made up of 100 percent the element copper (and no other substances), is known as an element. The ‘periodic table of the elements’ is a layout of all the elements. Adapted from The Mineral and Gemstone Kingdom. Glossary. 2017. The flow of energy in an ecosystem from producers to users. Energy on Earth (in most ecosystems) is derived from the sun, which energizes plants to convert inorganic nutrients into plant tissue ( photosynthesis ); those plants feed (i.e., energize) animals, which feed other animals, etc. There is some loss of energy each time it is transferred—for instance, only a small percentage of sunlight that strikes plants is used by the plant for photosynthesis; and animals can’t digest every particle of plant or animal they consume. A constant input of new energy from sunlight is required to continue the energy flow on Earth; energy is not completely recycled the way nutrients are. See also nutrient cycles and food chain . Adapted from Marietta College Department of Biology and Environmental Science. 2017. Biomes of the World—Ecology Pages. Environmental Biology—Ecosystems. Having green leaves or needles all year. Evergreen trees lose their leaves individually on an ongoing basis, rather than losing all of them in a short period at the end of a growing season in the manner of deciduous trees. Evergreen plants may be broadleaf or needleleaf . Quoted, in part, from The American Heritage Science Dictionary. 2005. Houghton Mifflin Company, Boston. Not native to a region; being from another part of the world. “Exotics” can be a synonym for ‘non-native plants.’ See also non-native invasive plant . Lava; magma that cooled rapidly above-ground, producing a fine-grained igneous rock . (Compare intrusive rock .) 
Adapted from The American Heritage Science Dictionary. 2005. Houghton Mifflin Company, Boston. (As used here) A natural process whereby the presence of one organism facilitates or helps another organism to grow. Both organisms may benefit from the interaction, or just one. The Fall Zone (sometimes called the fall line ) is the boundary between the Piedmont and the Coastal Plain ; often marked by falls and rapids as rivers leave the hard rocks of the Piedmont and step down into the more easily eroded sediments of the Coastal Plain. Quoted from Stewart, K.G. and M. Roberson. 2007. Exploring the Geology of the Carolinas. UNC Press, Chapel Hill, NC. A fracture in the earth which shows evidence of the movement of blocks of rock relative to one another. Earthquakes are the result of shifting along faults (‘fault motion’). All the different kinds of animals of a particular area. (As used here) A circular chain of cause-and-effect, in which the outcome of some natural processes affects other processes, which in turn affect the original processes. Any low plants—including herbaceous plants as well as new tree seedlings or small shrubs. This vegetation layer provides important cover for birds and small mammals in some natural communities . Efforts to control and extinguish fires, usually to prevent loss of human life and property. What happens in the natural interplay between rivers or streams and their floodplains. The higher parts of a modern stream valley (farther from the stream channel than the floodway ), which may be inundated only infrequently, e.g., a 10-year floodplain, a 25-year floodplain, etc. That part of a modern floodplain adjacent to the active stream channel, and which is typically inundated annually or more frequently, whenever the stream overflows its banks. All the different kinds of plants of a particular area. Easily crumbled or broken apart into small pieces. 
A type of weathering caused by the swelling and shrinking of moisture as it freezes and thaws in soil or in surface pores and cracks of rock. A dark colored, coarse grained igneous rock . (The intrusive equivalent of basalt—a dark volcanic (extrusive) rock.) Composed of minerals such as biotite and hornblende . A phase of forest regeneration , during which trees begin to colonize gaps created by fallen trees. Adapted from Allaby, M. 1994. The Concise Oxford Dictionary of Ecology. Oxford University Press. See also canopy gap . Underlying geologic material. For instance, the bedrock that lies beneath the soil (if any) in which a natural community grows. See also substrate . The study of the form and origin of landscapes, and the arrangement of geologic materials and processes on them. The growth of a seed into a seedling . A group of medium-textured soils found throughout the uplands of Rock Creek Park, whose parent material is acidic bedrock . Small stones or pebbles. In U.S. geology, gravel (or sometimes pebble) is a size label for rock fragments between 4 and 64 mm (about 1/6 inch to 2.5 inches). See also cobbles , sand , silt , and clay . Water that percolates down into the earth through permeable layers and cracks until it encounters a layer that water cannot penetrate. There it collects and/or flows horizontally, filling the porous spaces in soil , sediment , and rocks. Groundwater originates from rain and from melting snow and ice and is the source of water for aquifers, springs, and wells. Adapted, in part, from The American Heritage Science Dictionary. 2005. Houghton Mifflin Company, Boston. As it flows, groundwater can re-surface. It often carries with it dissolved base minerals leached from soils and bedrock . See also groundwater discharge , groundwater recharge , water table . Groundwater that re-surfaces as seepage, or even as a flowing spring. It may surface in such places as ravines, along the bottoms of hillsides, and in stream or river channels. 
Where a road or trail has been cut through a hillside of layered rock, you may be able to see groundwater seeping from between the layers of exposed rock for days after a good rain. The physical, chemical and biological processes that occur in or produce groundwater . The process of precipitation replenishing the groundwater supply as rain, melting snow, and melting ice soak into the earth. The period of the year when climatic conditions are favorable for plant growth. Quoted, in part, from McGraw-Hill Dictionary of Scientific and Technical Terms. 2003. Sixth edition. The McGraw-Hill Companies, Inc, New York. For instance, in this part of the world, generally the time period between the last freeze in the spring and the first frost in the autumn. The growing season can vary by plant species, as different plants have different tolerances for freezing temperatures. A term used to describe the hard-shelled fruits of plants such as the seeds of beech and oak. Hard mast is an especially important wildlife food in the fall and winter. It is high in fat content and is available when other plant foods (fleshy fruits and foliage) are not available. Quoted from Algonquin Provincial Park’s Online Learning Centre. 2009. The Science Behind Algonquin’s Animals. Glossary. A synonym for a broadleaf deciduous tree (or referring specifically to its wood). A member of the plant family Ericaceae, made up of mostly shrubs and small trees and including azaleas, rhododendrons, mountain laurel, blueberries, and huckleberries. Botanists sometimes refer to members of this family as ‘heaths.’ Since they usually thrive on acidic soils with few nutrients , they can help indicate soil quality. The consumption of plants by animals. A soil layer . See also soil profile . The continuous process by which water is circulated throughout the Earth and its atmosphere. Also known as the ‘water cycle.’ Adapted from The American Heritage Science Dictionary. 2005. Houghton Mifflin Company, Boston. 
The study of water (in all its forms) on the Earth’s surface, in the soil and underlying rocks, and in the atmosphere. Rock that crystallized from hot magma . Can include lava (which solidifies after reaching earth’s surface) or intrusive rock (which solidifies before reaching earth’s surface). At Rock Creek Park, most types of igneous rock (except quartz ) contain minerals that become valuable plant nutrients as they weather into soil . An adjective describing a natural or man-made layer that stops rainwater from soaking in or moving through it. Examples of man-made impervious surfaces are rooftops, parking lots, and streets. Examples of natural impervious layers beneath the soil are clay and shale . A block of rock trapped in another kind of rock. Species that, when present, pretty much guarantee you are in a certain natural community , or that certain environmental conditions are present (such as nutrient-rich soils ). Soil that is unable to provide nutrients in a form that plants can use, or in quantities sufficient to meet the needs of many plants, is called nutrient-poor or infertile . The westernmost part of the Atlantic Coastal Plain , closest to the Piedmont . An invertebrate animal that, as an adult, has 6 segmented legs, a three-part body, compound eyes, and two antennae. Adapted from The American Heritage Science Dictionary. 2005. Houghton Mifflin Company, Boston. An animal such as an insect , worm , shellfish, or snail, which has no backbone. Adapted from The American Heritage Science Dictionary. 2005. Houghton Mifflin Company, Boston. The International Terrestrial Ecological System Classification—a standard classification system for ecological systems . The International Vegetation Classification [of which the U.S. National Vegetation Classification ( NVC ) is a subset]—a standard used internationally by ecologists to classify (categorize) and map natural communities . One of the major intrusive rock units in the greater D.C. area. 
Medium to coarse-grained, it includes crystals of light and dark minerals (including quartz and biotite ), giving it a granite-like appearance. Named for the village of Kensington, MD. A type of acidic bedrock at Rock Creek Park, metamorphic in origin, and containing fragments and inclusions of various exotic rocks. Widespread in the eastern Piedmont between D.C. and Baltimore, and named for the town of Laurel, MD. Restricted to the east side of the Rock Creek shear zone. A height category ecologists use to describe plants in a natural community (as in canopy layer , shrub layer, field layer ). The process by which soluble parts are dissolved out from rocks, soils , or other matter as water or other liquid passes through slowly. Adapted from The American Heritage Science Dictionary. 2005. Houghton Mifflin Company, Boston. A long ridge of sand , silt , and clay built up by a river or stream along its banks, especially during floods. Adapted from The American Heritage Science Dictionary. 2005. Houghton Mifflin Company, Boston. Textural term referring to soil or sediment composed of roughly equal parts of sand (22-52 percent), silt (28-50 percent), and clay (8-27 percent). Sandy loam contains 50-85 percent sand, silt loam contains 50-85 percent silt, while clay loam is defined as having 28-40 percent clay. Rock with significant concentrations of iron and magnesium, making it heavy and dark. Molten rock underground. Quoted from Stewart, K.G. and M. Roberson. 2007. Exploring the Geology of the Carolinas. UNC Press, Chapel Hill, NC. A phenomenon in which large numbers of trees bear a lot of fruit in a particular year despite no seasonal change in temperature or rainfall; this does not occur every year but at intervals of two to 10 years. See also hard mast and soft mast . Adapted from Wisconsin Primate Research Center Library. 2009. Primate Info Net: Primate Factsheets Glossary. University of Wisconsin-Madison. 
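The loam entry above quotes numeric texture boundaries (loam: sand 22-52 percent, silt 28-50 percent, clay 8-27 percent; sandy loam: 50-85 percent sand; silt loam: 50-85 percent silt; clay loam: 28-40 percent clay). A minimal sketch of that rule as a classifier, using only the boundary values quoted in the entry — the function name and the order in which overlapping ranges are checked are illustrative assumptions, and the full USDA soil texture triangle defines more classes with finer boundaries:

```python
def classify_loam(sand, silt, clay):
    """Rough texture label from percent sand/silt/clay, using only the
    boundary values quoted in the glossary's loam entry. The quoted ranges
    overlap, so the check order below is an assumption, not USDA's rule."""
    if abs(sand + silt + clay - 100) > 0.5:
        raise ValueError("percentages should sum to roughly 100")
    if 28 <= clay <= 40:
        return "clay loam"
    if 50 <= sand <= 85:
        return "sandy loam"
    if 50 <= silt <= 85:
        return "silt loam"
    if 22 <= sand <= 52 and 28 <= silt <= 50 and 8 <= clay <= 27:
        return "loam"
    return "other"
```

For example, a sample of 40 percent sand, 40 percent silt, and 20 percent clay falls inside all three of the quoted "loam" ranges.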
When said of habitats, mesic means having a moderate or well-balanced supply of moisture. When said of plants, mesic means requiring a moderate or well-balanced supply of moisture. (As used here) Oak species that thrive on mesic sites. Examples at Rock Creek Park include: northern red oak and white oak. Any rock derived from other rocks by metamorphosis. See also metasedimentary rock and metavolcanic rock . Metamorphism is the process of change when rock is subjected to enough heat and pressure to change the minerals, textures, or structures without melting the rock. Deep burial or stress from fault motion can cause rock to undergo metamorphism, i.e., to metamorphose . Shorthand for metamorphosed sedimentary rock (i.e., metamorphic rock derived from sedimentary rock). Any sedimentary rock, such as shale or sandstone, which has been subjected to enough heat and pressure to change some or all of the minerals, textures, or structures without melting the rock. Shorthand for metamorphosed volcanic rock (i.e., metamorphic rock derived from volcanic rock). Any rock of volcanic origin which has subsequently been subjected to enough heat and pressure to change some or all of the minerals, textures, or structures without melting the rock. Local climatic effects associated with or caused by a specific landform. A microscopically small organism, such as a bacterium. (As used here) A site where great numbers of migrating birds may stop over for rest during their long flights in spring and fall. Any naturally occurring inorganic substance with an arrangement of atoms (chemical structure) that can be exact, or can vary within limits. Quartz and feldspar are examples of minerals. Elements that occur naturally as crystals are also considered minerals. A rock is mainly composed of minerals. The terms sand , silt , and clay can refer to specific particle sizes of minerals. Adapted, in part, from The Mineral & Gemstone Kingdom. Glossary. 
Any soil consisting primarily of mineral material (particles of weathered rock—sand, silt , and clay ) rather than organic matter (decomposed plant or animal matter). Decomposed organic matter (plant and animal). Pertaining to a mutually beneficial relationship between plant roots and fungi (plural for fungus). Plants support fungi by providing sugar and a hospitable environment. Fungi support plants by providing increased surface area for water uptake and by selectively absorbing essential minerals. Quoted from Plantlife. 2009. Lustrous rock, severely flattened, cleaved , and stretched, formed by the shifting of rock layers along a fault. Any kind of rock can become mylonite . (As used here) A plant species that occurs naturally in a particular region, and whose presence is not traceable to human actions, either directly or indirectly. A ‘community’ of native plants that recurs in the landscape with similar species composition and physical structure. Occurrences of a natural community also tend to share characteristic environmental features such as bedrock geology, soil type, and topographic position, and to have natural processes in common such as climate, means of energy flow , nutrient cycling, and water cycling. Each supports certain kinds of wildlife. Adapted from Canada. Ministry of Forests and Range. 2008. Glossary of Forestry Terms in British Columbia, March 2008. Natural events such as fire, severe drought, insect or disease attack, or wind that periodically disrupt natural communities or entire landscapes, impacting them to greater or lesser degrees, for greater or lesser periods of time. See also scale . Adapted from U.S. Forest Service. Cleveland National Forest Land Management Plan, Part 3 – Design Criteria for Southern California National Forests, Appendix L—Glossary (M–R). A descriptive study of nature, based more on observation than experimentation. It may include elements of biology, geology, climatology, ecology, and more. 
A ‘naturalist’ is a person who studies natural history . A process existing in or produced by nature (rather than by the intent of human beings), e.g., evaporation, volcanic activity. Adapted, in part, from WordWebOnline.com. 2017. Describes a non-native species of plant or animal that has permanently established itself in a region by successfully reproducing and living alongside native plants in the wild. Some naturalized plants become aggressive invaders. Adapted, in part, from The American Heritage Science Dictionary. 2005. Houghton Mifflin Company, Boston. A plant or animal that readily establishes itself in a new region to which it is not native. (As used here) A species (plant or animal) that was not historically present naturally in a particular region. Rather, its presence can be traced to human activity, either directly or indirectly. See also non-native invasive plant . A plant species that not only is non-native to a particular region, but also aggressively multiplies or spreads there, becoming a weedy pest and threatening the well-being of native populations of plants. The natural recycling of nutrients (chemical elements and molecules) on our planet. Carbon, nitrogen, phosphorus, and even water (hydrogen and oxygen) are just a few of the classic ‘nutrients’ whose pathways of movement are studied. As they are used in living organisms or non-living geological processes, nutrients are never ‘used up,’ but are either released and re-used elsewhere, or are held for long periods of time (such as in rock). Adapted from Marietta College Department of Biology and Environmental Science. 2017. Biomes of the World—Ecology Pages. Environmental Biology—Ecosystems. See also food chains and energy flow . (As used here) Chemical elements and molecules, in forms that plants can use. (‘Inorganic nutrients .’) At least sixteen are known to be essential to a plant’s well-being, even if in tiny amounts. 
Some come directly from carbon dioxide in the air and from water (hydrogen, oxygen, carbon). Others come from soil , dissolved in water (nitrogen, phosphorus, potassium, calcium, sulfur, magnesium, iron, zinc, manganese, copper, and other micro-nutrients). Groundwater dissolves some of these out of soil or bedrock . The U.S. National Vegetation Classification [a subset of the International Vegetation Classification ( IVC )]—a standard used nationwide by ecologists and the federal government to classify (categorize) and map natural communities . (As used here) Residue from living organisms (plants and animals) decomposing in the soil , along with living microorganisms. The easternmost part of the Piedmont , closest to the Atlantic Coastal Plain . Rock or sediment from which a soil has formed. The standard measure of acidity of a substance. On a scale of 1-14, 7 is neutral; anything that tests lower than 7 is technically acidic , and anything higher than 7 is alkaline (or basic ). See also acid , acidic, and base , basic. The process by which a plant makes its own food. Green chlorophyll pigment in the leaves absorbs light energy, which is used to fuel sugar production. A plant ‘photosynthesizes’ during most of its growing season . The scientific study of the natural features of the Earth’s surface. (Physical geography.) Adapted from The American Heritage Science Dictionary. 2005. Houghton Mifflin Company, Boston. ‘Physiographic regions’ are broad-scale subdivisions based on terrain texture, rock type, and geologic structure and history. Quoted from U.S. Geological Survey. 2000. A Tapestry of Time and Terrain. Near D.C., the three physiographic regions are the Blue Ridge Mountains, the Piedmont , and the Coastal Plain . The Fall Zone separates the latter two. The land bounded by the Blue Ridge Mountains to the west, and the flatter Atlantic Coastal Plain to the east. 
This rolling terrain is underlain by solid bedrock , which is thought to be the eroded ‘roots’ of an ancient mountain range. A plant or animal species that colonizes land that has been cleared or somehow severely disturbed. Typically sun-loving, fast-growing species, sometimes thought of as ‘weedy.’ Around the D.C. area, ‘pioneers’ include native plants such as wild blackberries, Virginia pine, tuliptree, and lots of grasses and other herbaceous species. Non-native plants have become some of the most aggressive pioneers in recent decades: Japanese stiltgrass, garlic mustard, Asiatic bittersweet, Japanese honeysuckle, multiflora rose, tree-of-heaven, princess tree, mimosa tree. Crescent-shaped deposits along meandering streams, on the inside of stream bends. See also cut banks . An animal, such as a woodpecker or chickadee, that can make its own tree cavity (hollow) in which to nest. See also secondary cavity nester . The activity of green plants and algae that produce and store their own energy (food) from sunlight and non-living chemicals, rather than consuming other organisms to meet their energy needs. Primary producers are therefore the base of the food chain . Any plant structure (such as a bulb, root, etc.) with the capacity to give rise to a new plant, e.g., a seed, a spore, part of the vegetative body capable of independent growth if detached from the parent. Adapted from Biology-Online Dictionary. 2017. Hard white rock at Rock Creek Park that weathers to an extremely acidic soil . (One of the most common minerals on earth (SiO2).) Quoted from Stewart, K.G. and M. Roberson. 2007. Exploring the Geology of the Carolinas. UNC Press, Chapel Hill, NC. Fractures in rock that are filled with milky white quartz . Quartz veins form when hot silica-rich water moves through cracks; as the water cools, quartz is deposited. Quoted from Stewart, K.G. and M. Roberson. 2007. Exploring the Geology of the Carolinas. UNC Press, Chapel Hill, NC. 
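The pH entry above describes a simple three-way rule: on a scale of 1-14, a reading of 7 is neutral, below 7 is acidic, and above 7 is alkaline (basic). A minimal sketch of that rule — the function name `ph_category` and the range check are illustrative assumptions, not from the source:

```python
def ph_category(ph):
    """Label a pH reading per the scale described in the glossary:
    7 is neutral, below 7 is acidic, above 7 is alkaline (basic)."""
    if not 1 <= ph <= 14:
        raise ValueError("the glossary describes a scale of 1-14")
    if ph < 7:
        return "acidic"
    if ph > 7:
        return "alkaline"
    return "neutral"
```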
A process in rocks that occurs during metamorphism , whereby the structure, but not the composition, of the minerals in a rock is altered due to incredible heat and pressure. The continuous renewal of a forest stand. Natural regeneration occurs gradually with seeds from the same or adjacent stands or with seeds brought in by wind, birds, or animals, and with stump-sprouting. Adapted from North Carolina Forestry Association. 2017. Forest Management Basics. (As used here) The process by which an organism breathes or somehow exchanges gases, especially carbon dioxide and oxygen, with the environment. Adapted from The American Heritage Science Dictionary. 2005. Houghton Mifflin Company, Boston. The microscopically thin biologically active zone of soil and microbes on and around plant roots. On, near, or related to the banks of a river. Any place where bedrock is exposed on the surface of the Earth. Tiny but gritty particles of rock. In U.S. geology, sand is a size label for rock fragments between about 1/16 mm and 2 mm (about 1/16 inch or less, but gritty). See also cobble, gravel , silt , and clay . The magnitude of a natural process in terms of extent (how large an area affected), or frequency/duration (how frequent the recurrence, or how long the process or impact lasts). Also, the intensity (how severe the impact) of a natural disturbance . (As used here) Pools of water that periodically dry up; therefore usually lacking fish. Particularly important habitat to some salamanders, frogs, and toads. An animal, such as a raccoon, bluebird, or wood duck, that is unable to excavate its own tree cavity (hollow) in which to nest, and so uses cavities that other animals or natural processes have created. See also primary cavity nester . 
(As used here) Loose particles of varying sizes including bits and pieces of rocks and minerals and organic matter such as shells. Includes sand , silt , and clay . Quoted, in part, from Stewart, K.G. and M. Roberson. 2007. Exploring the Geology of the Carolinas. UNC Press, Chapel Hill, NC. The movement of sediment and the processes that govern their motion. Sediment transport is typically due to a combination of the force of gravity acting on the sediment, and/or the movement of the air, water, or ice in which the sediment is carried. The force of gravity is due to the sloping surface on which the particles are resting. Adapted from Wikipedia contributors, "Sediment transport," Wikipedia, The Free Encyclopedia (accessed November 17, 2009). Rock that is formed when organic or inorganic sediments are compressed (or ‘lithified’) into layered solids by the weight of overlying material such as other rocks. It forms at pressures and temperatures that do not destroy fossil remnants. See also igneous rock and metasedimentary rock for contrasts. (As used here) The reservoir of viable seeds present in the soil . Adapted from Encyclopedia.com. The method by which a plant scatters its offspring away from the parent plant to reduce competition . Methods include wind, insects, animals, tension, and water. Adapted from Encyclopedia.com. 2017. (As used here) A baby tree or shrub . A vegetation community that, although largely comprised of native plants, owes its present form to historic human manipulation or severe natural disturbances . It is not considered a long-lasting community, but rather is giving way (or succeeding) to another, more natural community as natural ecological processes take their course over the years. May also be called a ‘successional community.’ See also succession . The aging and deterioration of plants. Late or delayed in developing or blooming. 
More specifically, it can refer to a pine cone or other seed case that requires heat from a fire to eventually open and release the seed. Quoted from Natural Resources 743 - Definitions. 2009. University of Wisconsin at Stevens Point. A fine-grained sedimentary rock , consisting of compacted and hardened clay , silt , or mud. Shale forms in many distinct layers and splits easily into thin sheets or slabs. Quoted from The American Heritage Science Dictionary. 2005. Houghton Mifflin Company, Boston. Woody plants which are typically multi-stemmed (and typically shorter than many trees). They comprise an important layer of many natural communities , providing cover for birds and small mammals. Tiny particles of rock so fine they are not gritty to the touch. In U.S. geology, silt is a size label for rock fragments between 1/256 mm and 1/16 mm. See also cobbles , gravel , sand , and clay . Textural term referring to soil or sediment composed of 50-85 percent silt , and roughly equal parts of sand and clay . Textural term referring to soil or sediment composed of mostly silt and clay with little sand . The direction a hill’s slope faces. (The orientation of a slope relative to the four points of the compass.) E.g., a slope with a northerly aspect faces north. Refers to where something lies on a slope, e.g., high-slope, mid-slope, or low-slope. All processes and events by which the configuration of the slope is changed; especially processes by which rock, surficial materials and soil are transferred downslope under the dominating influence of gravity. Quoted from Canada. Ministry of Forests and Range. 2008. Glossary of Forestry Terms in British Columbia, March 2008. Seeds that are covered with fleshy fruit, as in holly berries or blueberries. Unconsolidated mineral and organic sediment on the surface of the earth. Soil’s pH , texture, and biological activity are shaped by soil -forming processes, and matter greatly to plants. 
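The particle-size boundaries scattered across the cobble, gravel, sand, silt, and clay entries can be collected into a single lookup table. A minimal sketch using the millimeter cutoffs quoted in those entries; note two labeled assumptions not quoted in this glossary: the 2-4 mm slot between the quoted sand and gravel ranges (called "granule" on the Wentworth scale), and the "boulder" label for fragments above 256 mm:

```python
# Size labels for rock fragments, using the millimeter boundaries
# quoted in the glossary entries (U.S. geology usage).
SIZE_CLASSES = [
    (1 / 256, "clay"),     # finer than 1/256 mm (not gritty)
    (1 / 16,  "silt"),     # 1/256 mm to 1/16 mm
    (2.0,     "sand"),     # 1/16 mm to 2 mm (gritty)
    (4.0,     "granule"),  # 2-4 mm: between the quoted ranges (assumption)
    (64.0,    "gravel"),   # 4 mm to 64 mm (pebbles)
    (256.0,   "cobble"),   # 64 mm to 256 mm (about fist-sized)
]

def size_label(diameter_mm):
    """Return the size label for a rock fragment of the given diameter."""
    for upper_bound, label in SIZE_CLASSES:
        if diameter_mm < upper_bound:
            return label
    return "boulder"       # above 256 mm (assumption; not in the glossary)
```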
The particular arrangement of layers (or ‘horizons’) produced during the soil forming process. The basic unit of soil mapping and classification. Each soil series is comprised of soils which have similar soil profile characteristics and share a common parent material . A spike of small flowers growing on a fleshy axis, in plants such as skunk-cabbage and Jack-in-the-pulpit. Usually enclosed in a spathe . Hooded appendage of a plant such as skunk-cabbage or Jack-in-the-pulpit, which encloses the flowering organ ( spadix ). Any of various woodland wildflowers that appear above ground in early spring, flower and fruit, and die or return underground dormant, within a short two-month period. Quoted from Dictionary.com. 2017. Dictionary.com Unabridged. Random House, Inc. Regrowth of a felled tree by means of a sprout arising from its stump. Trees in a forest stand that are overtopped by yet taller trees (the canopy ). They may be younger specimens of canopy trees, or they may be naturally smaller or more shade-tolerant species. See also layer . (As used here) The layer (stratum) that lies beneath something. For instance, the soil in which a natural community grows. See also geologic substrate . [as in a forest that is ‘succeeding to’ another forest type] Give way to, become. The sequence of plant communities that develops in an area after large disturbance , from the initial stages of colonization until a long-standing mature natural community is achieved. Many factors, including climate and changes brought about by the colonizing organisms, influence the nature of a succession . Adapted from Dictionary of Biology. 2004. Fifth edition. Oxford University Press. Trees such as tulip poplar and pine that are typically the first to establish themselves on cleared or otherwise severely disturbed land. Sun-loving, fast-germinating species such as these form the canopy in the earliest successional communities in the D.C. area. 
A surficial geology map shows the distribution of all the loose (unconsolidated) materials such as cobbles , gravel , sand or clay which overlie solid bedrock in an area. The surficial deposits are the product of natural geologic processes such as glacier movement or water movement (such as river flood plains), or are attributed to human activity (such as road-fill or other land-modifying features), and may bear no relation to the bedrock beneath. Adapted in part from Maine Geological Survey. 2017. A type of acidic bedrock at Rock Creek Park, metamorphic in origin, similar to the Laurel Formation in appearance and origin, but containing a somewhat different suite of exotic inclusions, some rich in base elements. Widespread in the eastern Piedmont between northern VA and MD, and named for the village of Sykesville, MD. Restricted to the west side of the Rock Creek shear zone. (As used here) See ancient river terrace or floodplain terrace . The nearly flat part at the base of a hill slope. It receives deposits of sediment—fine or coarse—that get transported downslope by gravity or other means. Medium- to coarse-grained intrusive igneous rock containing certain essential minerals, including at least 20 percent free quartz ( acidic ). Other mineral content varies widely, resulting in rocks that weather to soils of varying but fairly good fertility for plants at Rock Creek Park. The contour and shape of the land. Quoted from Stewart, K.G. and M. Roberson. 2007. Exploring the Geology of the Carolinas. UNC Press, Chapel Hill, NC. The uppermost several inches to one foot of soil , enriched in organic matter and containing most of the soil’s microorganisms and biological activity that support plant growth. The process by which water, drawn up from the ground into a plant’s leaves, evaporates into the air through tiny openings in the leaves called stomata. Part of the water cycle . 
tributary: A stream or river which flows into a larger stream, river, or lake (and not directly into the ocean).
turbidity: Cloudiness in water, caused by suspended elements.
unconsolidated sediment: Sediment composed of various proportions of cobbles, gravel, sand, silt, and clay that have never been buried sufficiently deep to be compressed and consolidated (‘cemented’) into bedrock.
understory: The plants growing under a forest’s canopy. Includes small trees, young canopy trees (saplings and seedlings), shrubs, and the herbaceous layer.
upland: Non-wetlands; elevated land whose vegetation isn’t dependent on the water table.
upland gravels: (As used here) Well-rounded, water-polished stones and sand that were likely deposited by a pre-historic river, but can now be found on some ridgetops in Rock Creek Park. See also ancient river terrace.
watershed: Natural drainage basin; all the land drained by a river or stream and its tributaries.
weathering: The breakdown of rock as it is exposed to weather conditions such as heat, water, ice, or pressure, or to chemicals or biological organisms.
wetlands: Land areas that are wet enough from surface water or groundwater to be saturated at least a good part of the year. Because of the saturated soils, the plants that grow there are different than those in areas of greater elevation.
windthrow: The uprooting and overthrowing of trees by the wind. Quoted from Merriam-Webster Online Dictionary. 2017. Merriam-Webster Online.
worm: Term for many different distantly-related animals—most of them invertebrates—which have a soft, long body that is round or flattened and typically lacks legs. Adapted from The American Heritage Science Dictionary. 2005. Houghton Mifflin Company, Boston.
2019-04-23T09:51:24Z
https://www.explorenaturalcommunities.org/glossary
Early life events can exert a powerful influence on both the pattern of brain architecture and behavioral development. The paper examines the nature of nervous system plasticity, the nature of functional connectivities in the nervous system, and the application of connectography to better understand the concept of a functional neurology that can shed light on approaches to instruction in preschool and primary education. The paper also examines the genetic underpinnings of brain development such as synaptogenesis, plasticity, and critical periods as they relate to numerosity, language, and perceptual development. Discussed is how the child's environment in school and home interacts with and modifies the structures and functions of the developing brain. The role of experience for the child is to both maintain and expand the child's early wiring diagram necessary for effective cognitive as well as neurological development beyond early childhood.
From Camillo Golgi and Santiago Ramón y Cajal in the late 1890s, with their extensive observations, descriptions, and categorizations of neurons throughout the brain, the formation of the neuron doctrine, and the start of modern neuroscience, we have come a long way in understanding the nature of the nervous system in the control of human behavior. Little of that work has actually wound its way into the classroom and even less into public policy in education. The human brain develops from conception to the early twenties from the bottom up, with vital and autonomic functions and control coming first, cognitive-motor, sensory, and perceptual processes later, and integration and decision making last (Melillo & Leisman, 2009). The child's brain is influenced by the combined roles of genetics and experience (Leisman, Machado, Melillo, & Mualem, 2012; Leisman & Melillo, 2012; Melillo & Leisman, 2009). The brain's capacity for change decreases with age (Leisman, 2011). Cognitive, emotional, and social capacities are inextricably intertwined throughout the life course (Leisman, Braun-Benjamin, & Melillo, 2014). Motor and cognitive functions interact, our brains being the direct result of bipedalism (Melillo & Leisman, 2009). Toxic stress damages developing brain architecture, which can lead to life-long problems in learning, behavior, and physical and mental health. The child's environment directly affects synaptogenesis and allows for neurological optimization (Gilchreist, 2011; Leisman, Rodriguez-Rojas et al., 2014). Early life events can exert a powerful influence on both the pattern of brain architecture and behavioral development.
Both early as well as later experiences contribute to the wiring diagram of the child's brain, but experiences during critical periods establish the basis for development beyond the early years. The role of the kindergarten and nursery teachers becomes critical in establishing the solid functional footing of the developing child and the neurological adult. The foundations of brain architecture are established early in life through a continuous series of dynamic interactions between genetic influences, environmental conditions, and experiences (Friederici, 2006; Majdan & Shatz, 2006). We have come to learn that the child's environment significantly impacts the timing and nature of gene expression, directly affecting the child's brain architecture. Because specific experiences potentiate or inhibit neural connectivity at key developmental stages, these time points are referred to as critical periods (Knudsen, 2004). Brain, cognitive, sensory, and perceptual development does not occur simultaneously but rather at different developmental stages, as represented below in Fig. 1. Each one of our perceptual, cognitive, and emotional capabilities is built upon the scaffolding provided by early life experiences. Examples can be found in both the visual and auditory systems, where the foundation for later cognitive architecture is laid down during sensitive periods for basic neural circuitry. Fig. 1. Human brain development: neurogenesis in the hippocampus through experience-dependent synapse formation. The capacity to perceive stereoscopic depth requires early experience with binocular vision (Crawford, Pesch, & von Noorden, 1996), which at a later point in development may have implications for perceptual and cognitive development. Likewise, the capacity to perceive a range of tones requires variation in the tonal environment, and exposure to such variation later leads to language processing and proficiency (Kuhl, 2004; Newport, Bavelier & Neville, 2001; Weber-Fox & Neville, 2001).
The absence of tones associated with a given language will eradicate the discrimination of those developmentally unheard tones by the time the infant is one year old (Werker & Tees, 1983). A second language acquired early enough will have the same brain representation as the first language throughout the lifespan, but a second language learned later in development, even when spoken at native level, will be represented differently in the brain relative to the first language (cf. Leisman, 2012; Leisman & Melillo, 2015). Although early experiences are reflected in behavior, behavioral measures tend to underestimate (in part because of a lack of sensitivity and specificity) the magnitude and persistence of the effects of early neuronal development (Knudsen, 2004). In order to explore the role of timing and quality of early experiences on later cognitive function, we must therefore have a genetic framework of the developing brain. We see no fundamental difference between the task of the educational system, rehabilitation after neurological insult or developmental disabilities, the task of parenting, the effects of social interaction, the effects on the nervous system of sport, or even the ability to intervene in the natural consequences of cognitive aging. The term education can be used interchangeably with rehabilitation, as all directly relate to measurable dynamic plastic changes in neural connectivities. Education has been grabbing at straws for a long time. Often when a preliminary finding is reported in the neuroscience literature or presented at a conference, it is grabbed and expounded upon with little consideration of the fundamental nature of the biological processes that underlie those changes. For better or worse, over the last 10 years, education has been actively and aggressively looking to the biological sciences in order to inform education policy and practice.
A good example is the 1998 decision in Georgia to fund an expensive program to provide CDs of Mozart's music to all new mothers. In establishing this policy, the governor of Georgia drew heavily on work in cognitive neuroscience conducted at the University of California, Irvine. The actions were taken in the hope of “harnessing the ‘Mozart effect’ for Georgia's newborns – that is, playing classical music to spur brain development.” Despite what the program implied, Mozart effect research, upon close examination, had little to offer education. One study, reported in Nature (Rauscher, Shaw, & Ky, 1993), found that listening to Mozart raised the IQs of college students for a brief period of time. Another study found that keyboard music lessons boosted the spatial skills of three-year-olds (Schlaug, Norton, Overy, & Winner, 2005). The cognitive neuroscientists responsible for this work were baffled by Georgia's program and the actions based on their work. Since this debacle, major figures in the sciences have published articles emphasizing caution and care as scientists, educators, and practitioners proceed down this exciting but pitfall-laden road. These cautionary articles have laid the groundwork for relationships between neuroscience and education. However, there is a paucity of publications that systematically examine an area of research where conservative but confident claims can be made of the benefits of interdisciplinarity. Most currently prevailing patterns of education are heavily biased towards left cerebral functioning and are antithetical to right cerebral functioning. Reading, writing, and arithmetic are all logical linear processes, and for most of us are fed into the brain through our right hand. Most educational policies have tended to aggravate and prolong this one-sidedness. There is a kind of damping down of fantasy, imagination, clever guessing, and visualization in the interests of rote learning, reading, writing, and arithmetic.
Great emphasis is placed upon being able to say what one has on one's mind clearly and precisely the first time. The atmosphere emphasizes intra-verbal skills, “using words to talk about words that refer to still other words” (Bruner, 1971). Educational institutions have placed a great premium on the verbal/numerical categories and have systematically eliminated those experiences that would assist young children's development of visualization, imagination, and/or sensory/perceptual abilities. The over-analytic models so often presented to children in their textbooks emphasize linear thought processes and discourage intuitive, analogical, and metaphorical thinking. These factors of neural functioning among children have been left to modification by random environmental, rather than systematic institutional, means. Education, which is predominantly abstract, verbal, and bookish, does not have room for raw, concrete, esthetic experience, especially of the subjective happenings inside oneself. Education imposes a structure of didactic instruction, right-wrong criteria, and dominance of the logical-objective over the intuitive-subjective on the learning child so early in the course of emergent awareness of his world and of himself that, except in rare cases, creative potential is inhibited, or at least diminished (cf. Melillo & Leisman, 2009). This leads us to affirm that our system of education is one which leads to the underdevelopment of the right hemisphere. As a result of excessive emphasis on intellectualizing, verbalizing, analyzing, and conceptualizing processes, ‘curriculum’ has become equated with mere ‘understanding’. This imposes a ‘neurotogenic limitation’ and binds mental processes so tightly that they impede the perception of new data.
In the words of Gazzaniga (1975) a long time ago, curriculum is “inordinately skewed to reward only one part of the human brain leaving half an individual's potential unschooled.” The traditional preoccupation with formal intellectual education effectively blocks the possibility for students to recognize and cultivate creativity and transcendence. It has been the adaptation by educators of applications of the brain sciences into the classroom, and the culture of dichotomies in the behavioral sciences over the past 150 years, that have placed undue reliance by our educational systems on functional brain models that may be irrelevant at best and damaging at worst to children's classroom performance and its evaluation. What emerges as the central proposition of this paper is that (A) the examination and study of regional cerebral differences in brain function as a way of explaining and evaluating the learning process within the educational system is irrelevant (cf. Figs. 6 A & B); (B) the evaluation of students by standardized aptitude and achievement tests is not sufficient, although probably still necessary; and (C) educational systems had better examine student performance and teach towards “cognitive efficiency” rather than simply mastery vs. non-mastery, with methods that employ both psychophysics that examine person-environment interaction and mathematical means of examining optimization and the strategy used to get there, as well as how far or close a student is functioning from a mathematically derived optimization regression line or, in fact, how quickly the learner is progressing in that direction. Educators, although it is perhaps not palatable to conceive of early childhood education as such, are producing a product, and production management techniques should be useful for evaluating not just the product but the process or “manufacture” of that product as well.
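One crude reading of this “distance from an optimization regression line” idea can be sketched numerically. The least-squares fit below is standard, but the data, the variable names, and the interpretation of residuals as an efficiency score are our own illustration, not a method from the paper:

```python
# Sketch (our illustration): score learners by their signed distance from a
# best-fit "optimization" regression line over practice time vs. performance.
# All data values below are invented.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

def efficiency_scores(xs, ys):
    """Signed residual of each learner from the group's regression line:
    positive = above the fitted line, negative = below it."""
    a, b = fit_line(xs, ys)
    return [y - (a * x + b) for x, y in zip(xs, ys)]

# Hypothetical practice hours vs. test scores for five learners.
hours = [1.0, 2.0, 3.0, 4.0, 5.0]
scores = [52.0, 58.0, 70.0, 69.0, 81.0]
print([round(r, 1) for r in efficiency_scores(hours, scores)])
# → [-0.2, -1.1, 4.0, -3.9, 1.2]
```

Tracking how a learner's residual shrinks (or grows) across sessions would then be one way to quantify “how quickly the learner is progressing in that direction.”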
The uniquely large number of cells and their potential for association as well as asymmetry is directly the result of bipedalism along with genetic mutations (cf. Melillo & Leisman, 2009). Once the large cell assemblies were established and pressures for bi-symmetry were released, humans then could develop asymmetric functions in their brains that were not directly tied to motor or autonomic control. Hemispheric specialization then could develop different control centers consistent with the previous function of that hemisphere, creating most of the unique human characteristics. The other demand that bipedalism would place on the brain would be the need to be more precise and complex in the synchronization of muscles to be able to walk, run, and jump. This increased synchronization would require greater frequency of oscillation of control centers within the inferior olive and cerebellum and their feedback to the intralaminar nucleus of the thalamus and its reciprocal thalamo-cortical projections. This increase in oscillation into the 40-Hertz range is thought to be required to achieve binding within various cortical sites into one continuous conscious percept of the world. This appears to be the foundation of human consciousness, which is thought to be unique in humans and due to unique connectivities in the human brain. Therefore, a proposal of an increase in neuroblast proliferation in the human brain is consistent with the concept of neoteny in human evolution. This concept states that certain characters are delayed in their development with respect to others (paedogenesis) (cf. Fig. 1) (Bjorklund, 1997). This resulted in changes in adult morphology during evolution. This is thought to be the process in the human skull, in which infantile dimensions are comparable to those of other primates. This first factor explains the increase in cell number that concurs with minimal genetic change.
However, the maintenance of these cells would not continue without the appropriate activity, presynaptically and postsynaptically. In essence, they require a power source, and they would, in turn, require connections to expanded areas sub-cortically. Bipedalism would provide both by increasing exponentially the amount of temporal and spatial summation within sensory-motor networks, especially cerebellum, thalamus, and cortex. This would require expanded areas of cerebellum and thalamus that would evolve in parallel with the expanded areas of cortex and could provide a site for connection to these increased numbers of neurons. This would take place because, although the genetic change would increase cell number, it would do so with a non-directional force, which would not specify any specific shape. Posterior epigenetic reorganization (synaptic stabilization) would determine the shape and configuration of the networks within the brain itself. Therefore, genetic factors would produce the density of cells required, but environmental factors would trim and shape it in a specific fashion. Plasticity is the ability of the brain to grow, and whether it is growing on a short-term basis or on a long-term basis in the case of evolution, the facts of plasticity are consistent. This can only mean that there was some increase in the frequency, duration, and intensity of stimulation of the human brain over time for it to have evolved as uniquely as it has. There are two things that make humans unique among other organisms: 1) we have a larger cortex and 2) we stand upright (bipedal). Refinements in the neural circuits that mediate sensory, emotional, and social behaviors are driven by experience (Feldman & Knudsen, 1998; Leisman et al., 2012).
Specifically, postnatal experiences drive a protracted process of maturation at the structural and functional level, but the very ability of such developmental processes to occur successfully depends in large part on the prenatal establishment of the fundamental brain architecture that provides the basis for receiving, interpreting, and acting on information from the world around us (Hammock, 2006). While the term “blueprint” has been utilized in the past to describe a fixed set of genes with inflexible interactions, the term is used here as an analogy to a rough draft or design – the framework from which a more defined structure will evolve – or, alternatively, an operating system in which programs have yet to be laid down. The emergence of this architecture in all vertebrate species begins early; in humans, it occurs within the first two months post-fertilization (Levitt, 2003). The cerebral cortex has garnered substantial attention in defining key developmental features across species. This is due in part to the technical advantages of studying a well-organized, layered structure, and to the functional relevance of linking typical and atypical maturation of complex behaviors to neurodevelopment. The neocortex in all mammalian species comprises six layers of neurons, the architecture, connectivity, and chemistry of which are distinct depending upon their location. The neocortex is organized to receive information from the organism's surrounding environment, typically through connections with the thalamus. It does so by integrating information within and across architecturally distinct functional domains, and then relays this information to other brain centers that generate an appropriate functional response. There are two major organizing principles of the neocortex influenced by gradients of gene networks that have developed evolutionarily. First, the precursors of different functional areas emerge during roughly the first and second trimester of pregnancy in the human (cf.
O'Leary, Chou, & Sahara, 2007). Regional specification is not absolute, but involves networks controlling the expression of axon guidance molecules that control the initial input and output wiring plan. Expansion of the size of the neocortex during evolution (e.g., 1000-fold between mouse and human) occurs mostly in this period (Rakic, 2005). The ‘inside-out’ pattern of neuron production and migration provides the basis for building the cell connectivities that form functional areas, with small variations in the ratio of excitatory to inhibitory neurons in different regions. In fact, this organization provides a framework for the later-developing refinement of circuits influenced extensively by patterns of physiological activity through experience and training. Experiments in genetically manipulated mice demonstrate that, by altering the expression of just one genetic transcription factor, cortical regions can be changed (Cholfin & Rubenstein, 2007). For example, the genetic factor emx2 controls the expression of the Fgf8 factor near the anterior end of the cerebrum. Fgf8 alone is sufficient to specify the cortical regions that will eventually receive connections that are typical of frontal and somatosensory cortices (Fukuchi-Shimogori & Grove, 2003). This type of early genetic re-specification is functionally relevant. For example, Fgf17 is responsible for the initial patterning of different frontal cortex areas (Cholfin & Rubenstein, 2007). It is not our purpose here to pursue this notion in detail other than to indicate that the early specification and re-specification of the neocortex by genetic factors is powerful, because additional axon guidance molecules serve as important chemical cues for getting axons to grow into their correct target region prior to beginning the extended process of synapse formation (cf. Alcamo et al., 2008). Gene regulatory networks also can influence the initial size of cortical areas by modulating the number of neurons produced.
The long-distance circuit projections that help to define functional cortical areas, and even functional differences in superficial and deep projecting neurons, are altered when the disruption of early gene networks modifies guidance cues so that atypical connections are made. Though we tend to think that genetic mechanisms are immutable, it is important to stress that the expression of early gene networks can be perturbed not only by catastrophic genetic mutations that disrupt important regulatory genes, but also by prenatal environmental influences, such as drugs, alcohol, toxins, and inflammatory responses. These may have less profound impacts on brain patterning, but nonetheless can result in long-term disruption of cellular differentiation and behavioral development (Stanwood & Levitt, 2008). In all mammalian species, this early period of specified patterning to generate a unique architecture is followed by an extended period of synapse formation, adjustment, and pruning that typically extends from the last quarter of gestation through puberty (Bourgeois, Goldman-Rakic, & Rakic, 1999). Although genetics provides an important foundation for early development, it is only a framework upon which the early childhood environment can influence future structure and function. This can best be illustrated through studies of the sensory systems, which demonstrate the crucial role of environment in the early development and maintenance of the nervous system (Leisman, 2011). Such work also demonstrates the need for patterned physiologic activity during development, as well as for the refinement and maintenance of detailed sensory maps. Synaptic reorganization takes place most predominantly during childhood and adolescence (Blakemore, 2012). During these periods the brain becomes sensitive to change, which allows it to develop in unique ways dependent upon the individual's age, gender, and environment, along with many other variables (Andersen, 2003).
The concept of “self-organization” indicates that the brain actually organizes itself based on the individual's experiences. Environmental stimulation and training can affect how the brain develops and at what pace (Andersen, 2003; Leisman, 2011). The environment can include factors like location and surroundings, home, parenting, and of course the classroom, as well as the circumstances in each of those environments (Blakemore, 2012; Tau & Peterson, 2010). Environment can also be identified with a child's emotions or responses to certain stimuli – in this case, the concept of self-organization postulates that the brain organizes itself based on each child's unique experiences. The fact that humans have a greater capacity than rats or even chimps for self-organizing, plastic, or flexible behavior provides no implication that we are either all stereotyped or all flexible in our behavior and brain organization. Stereotypy allows for efficiencies, but plasticity or flexibility allows for adaptation to the exigencies of one's environment. We, given the notions of stability and flexibility (Leisman, 1980), have a basis for rehabilitation and effective adaptive function. The concept of the interplay between stability and flexibility, and its implications for the education of the normally developing child's brain, needs to be viewed as a relativistic notion, set against the features of the organism that are not plastic. In order to identify flexibility or plasticity, one must be able to identify the invariant and constant. The identification of plasticity requires us to know the constraints of the system. The fact, however, that we are more plastic than other organisms is expressed even in our adult lives. This suggests that our capacity for systematic change, and the fact that we retain flexibility across our later developmental periods, allows the application of rehabilitation thinking and the measurement of optimization throughout the life span.
Hebb postulated in 1949 that when one cell excites another repeatedly, a change takes place in one or both cells such that one cell becomes more efficient at firing the other (Hebb, 1949). This view is not limited to a particular cell and its arborized neuronal connections but extends to definable anatomical regions. It is this notion that forms the basis of our concept of plasticity. Hebb was the first to propose the ‘enriched environment’ as an experimental concept. He reported anecdotally that laboratory rats that were nurtured at home as pets were behaviorally different from their littermates kept at the laboratory. Hebb was not the only one who conceptualized enriched nurturance as having an effect on nervous system structure and function. Hubel and Wiesel examined the effects of selective visual deprivation during development on the anatomy and physiology of the visual cortex (Hubel & Wiesel, 1970; Wiesel & Hubel, 1965), and Rosenzweig and colleagues (Rosenzweig, 1966; Rosenzweig & Bennett, 1996; Rosenzweig et al., 1978) introduced enriched environments as a testable scientific concept by measuring the effects of environment on ‘total brain weight,’ ‘total DNA or RNA content,’ or ‘total brain protein’. Numerous researchers have demonstrated a significant linkage between enrichment and neurological plasticity that has included biochemical changes, gliogenesis, neurogenesis, dendritic arborization, and improved learning and memory (Greenough, West, & DeVoogd, 1978; Kempermann, Kuhn, & Gage, 1997). An example is provided below in Fig. 2. Fig. 2. Dendritic morphology of pyramidal neurons in layer III of the somatosensory cortex in rats housed in (left) standard and (right) enriched environments. Bar = 25 μm. Enrichment significantly increases dendritic branching as well as the number of dendritic spines (cf. Johansson & Belichenko, 2001).
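Hebb's postulate is easy to state computationally. The sketch below is a generic Hebbian weight update; the learning rate, activity values, and iteration count are invented for illustration, and real synapses saturate rather than grow without bound:

```python
# Minimal sketch of Hebb's 1949 postulate: a synapse between two cells that
# fire together is strengthened, so the presynaptic cell becomes more
# efficient at driving the postsynaptic one. All constants are invented.

def hebbian_update(w, pre, post, lr=0.1):
    """One Hebbian step: dw = lr * pre * post (coactivity strengthens w)."""
    return w + lr * pre * post

w = 0.2
for _ in range(20):            # repeated coactivation of both cells
    w = hebbian_update(w, pre=1.0, post=1.0)
print(round(w, 2))             # → 2.2 (weight grows with repeated pairing)
```

Note that if either cell is silent (`pre` or `post` is 0), the weight does not change, which is exactly the “fire together, wire together” asymmetry Hebb described.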
In an experimental setting, an enriched environment is ‘enriched’ relative to standard laboratory housing conditions: experimental animals live in larger cages than their non-enriched peers and have greater opportunity for social interaction, with nesting material, toys, and food locations frequently changed. The enriched animals were also given opportunities for voluntary activity on treadmills. These experiences have allowed researchers to formulate a definition of enrichment as “a combination of complex inanimate and social stimulation” (Rosenzweig et al., 1978). In the landmark studies of vision by Wiesel and Hubel (1965), it was demonstrated that rearing kittens with normal visual experience results in each eye having sole access to alternating columns of neurons in layer IV of the striate cortex. At birth, however, both eyes synapse on all neurons in layer IV. In order to assure that a neuron is stimulated by experience coming from only one eye, a competitive process occurs in which activation and neighboring inhibition result in an alternating pattern of connectivity between columns of neurons in layer IV and each eye (Wiesel & Hubel, 1965). When kittens were reared with one eye closed for a period of time after birth, the occluded eye became essentially functionally blind. This blindness is due to the elimination of connections of the closed eye to layer IV and the lack of exposure to patterned activity. If occlusion extends beyond a certain time period, the typical pattern of ocular representation cannot be recovered despite the restoration of visual input to both eyes (Wiesel & Hubel, 1965). It has been hypothesized that the initial ingrowth of axons from the thalamus to ocular dominance columns in visual cortex is governed by molecular cues (Crair, Horton, Antonini, & Stryker, 2001; Crowley & Katz, 2000).
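The competitive process described for layer IV can be caricatured with two synaptic weights and a fixed total synaptic budget. This is our toy illustration, not Wiesel and Hubel's model; every constant is invented, and the normalization step stands in for the activation-and-inhibition competition described above:

```python
# Toy competitive model (our illustration): two eyes drive one layer-IV cell;
# Hebbian growth plus a fixed total synaptic budget makes the eyes compete.
# Silencing one eye ("lid suture") lets the open eye capture the cell.

def develop(w_left, w_right, act_left, act_right, steps=200, lr=0.05):
    for _ in range(steps):
        w_left += lr * act_left * w_left      # Hebbian growth per eye
        w_right += lr * act_right * w_right
        total = w_left + w_right              # normalization = competition:
        w_left, w_right = w_left / total, w_right / total  # fixed budget of 1
    return w_left, w_right

# Normal rearing: both eyes active, connectivity stays balanced.
print(develop(0.5, 0.5, 1.0, 1.0))   # ≈ (0.5, 0.5)
# Monocular deprivation: left eye silent, right eye takes over.
print(develop(0.5, 0.5, 0.0, 1.0))   # left weight shrinks toward 0
```

The deprived eye's weight decays not because it is punished directly but because the open eye keeps claiming more of the fixed budget, mirroring the point that the loss is competitive rather than a simple consequence of disuse.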
It has recently been shown, for example, that the decreased visual acuity seen in the adult rat suffering from chronic monocular deprivation is reversed if the adult rat is treated with dark exposure prior to removal of the occlusion (He, Ray, Dennis, & Quinlan, 2007). The increased plasticity induced by the dark environment may be due to a lack of input to visual cortex through the functioning eye, and therefore a reduction in the strength of previously established connections. A similar restoration of visual acuity can also be induced with chronic administration of fluoxetine (Maya Vetencourt et al., 2008). Such dramatic changes in sensory system connectivity suggest that activity-dependent potentiation of these initial axons is required to maintain connections among cortical regions. In the case of primary visual cortex, local circuit neurons have been implicated in activity-dependent plasticity through GABAergic inhibition over a wide range of neighboring axonal paths (Fagiolini et al., 2004; Hensch & Stryker, 2004). An altered pattern of activity through one circuit can thus radically change neighboring circuits through an increase or decrease in the inhibition of mediating cells. The early development of visual pathways may be likened to the laying of a foundation and scaffolding for a building. If the scaffolding pattern is changed, the building may not be constructed in its original form, though a functional alternative may be reached. Thus, irreversible changes at the synaptic level do not necessarily translate into irreversible changes in a complex behavior (Feldman & Knudsen, 1998). For example, we now understand that the sensitive period for visual representation reflects, predominantly, the critical period for thalamic input to layer IV (Pascual-Leone, Amedi, Fregni, & Merabet, 2005), but that plasticity of other sensory systems may allow a blind person to demonstrate normal – and possibly enhanced – spatial awareness (Amedi et al., 2007).
Plasticity in higher regions involved in spatial awareness feeds back upon lower pathways, thus compensating for an abnormal visual representation. Advanced perceptual processes are also dependent upon the normal development of basic visual systems. For example, early visual deprivation due to congenital cataracts can lead to subtle but persistent deficits in face processing, even when the cataracts are removed in the first months of life (LeGrand, Mondloch, Maurer, & Brent, 2001). Similarly, experience with specific faces, such as same vs. different species, powerfully shapes subsequent face specialization. Monkeys deprived of viewing faces since birth remain capable of discriminating both monkey and human faces once faces are selectively restored to the visual environment, but the kind of faces restored determines which faces the monkey will subsequently be able to discriminate: monkeys selectively exposed to human faces can discriminate only human faces, not monkey faces, and monkeys selectively exposed to monkey faces can discriminate only monkey faces, not human faces (Sugita, 2008). Critical periods are important stages in the lifespan of the child during which he or she acquires a particular developmental skill that is indispensable and that can influence later development. If the child does not receive appropriate stimulation during a given critical period to learn a given skill or trait, it may be difficult, ultimately less successful, or even impossible to develop some skills later in life. This is fundamentally different from the sensitive period, a more extended period of time during development when the child or adolescent is more receptive to specific types of environmental stimuli, usually because nervous system development is especially sensitive to certain sensory stimuli at that given time.
For example, the critical period for the development of a human child's binocular vision is thought to be between three and eight months, with sensitivity to damage extending up to at least three years of age. Further critical periods have been identified for the development of hearing and the vestibular system (Melillo & Leisman, 2009; Robson, 2002). Confirming the existence of a critical period for a particular ability requires evidence that there is a point after which the associated behavior is no longer correlated with age and ability stays at the same level. Sensitive periods of the child's cognitive development associated with the development of his or her nervous system are represented in Fig. 3 below, which depicts sensitive periods during early brain development for (A) sensation, emotional control, numerosity and symbolic representation and (B) speed of processing, working memory, long-term memory and vocabulary, illustrating the learning sensitivity for numerous cognitive as well as social skills. Hubel and Wiesel's experiments involving visual deprivation brought about the concepts of “sensitive” and “critical” periods in early cognitive development. “Sensitive” periods are defined as times in development during which the brain is particularly responsive to experiences in the form of patterns of activity (Daw, 1997). Further, such a time point may be termed a “critical” period if the presence or absence of an experience results in irreversible change (Newport et al., 2001; Trachtenberg & Stryker, 2001). The factors that allow a circuit underlying cognition to be plastic – or render it unchangeable – are not yet well understood. In the area of speech and language, the “maturational hypothesis” predicts that native language proficiency cannot be obtained when learning begins after puberty (Werker & Tees, 2005).
Studies supporting this theory have correlated the degree of accent in a second language with age at the time of acquisition of that language (Birdsong & Molis, 2001). Adults exposed to a second language in early childhood were found to have native-like accents and patterns of tone (Gordon, 2000; Stein et al., 2006). Other researchers have also found a negative correlation between age at acquisition and grammaticality judgments (Komarova & Nowak, 2001). However, as seen in Fig. 4 below, the brain areas representing early bilingual language acquisition overlap, in contrast to late bilingual language acquisition; panels (A) and (B) represent the effect on the brain of early as opposed to late exposure to a second language. The figures clearly indicate the optimization and efficiency of brain connections when notions related to early training and critical periods are applied. Several investigators have used the theory of neural networks, originally developed for vision research, to model the activity of individual neurons and/or groups of neurons in the brain during learning (Morton & Munakata, 2005). These neural network models are particularly useful for comparing the experience-independent and experience-based accounts of sensitive periods, because the network can be kept constant with regard to features affected by maturation, motivation, and amount of exposure. Returning to the work of Hubel and Wiesel, it is important to note that the loss of binocular vision in the kitten did not arise simply from the absence of input to the occluded eye. Occluding both eyes during the same period of development was shown not to result in loss of binocular vision (Cynader & Mitchell, 1980). It is necessary for one eye to have access to layer IV of the visual cortex while the other eye is denied access, allowing exclusive connectivity of the unoccluded eye to striate cortex.
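The competitive logic these network models capture can be sketched in a few lines. The following toy simulation is our own illustration, not a published model: a single cortical unit receives Hebbian-modifiable weights from two eyes, with normalization enforcing competition, and silencing one input reproduces, in miniature, the monocular-deprivation result described above.

```python
import numpy as np

def ocular_dominance(steps=2000, deprived=False, seed=0):
    """Toy Hebbian model of two eyes competing for one cortical unit.

    w[0], w[1] are synaptic weights from the left and right eye. Each
    step, correlated binocular activity drives a Hebbian update, and
    normalization makes growth of one connection come at the expense
    of the other. deprived=True silences the right eye (monocular
    deprivation), so its weight decays toward zero.
    """
    rng = np.random.default_rng(seed)
    w = np.array([0.5, 0.5])
    for _ in range(steps):
        s = rng.random()                  # shared visual scene
        x = s + 0.05 * rng.random(2)      # each eye sees scene + small noise
        if deprived:
            x[1] = 0.0                    # occluded eye sends no patterned input
        y = w @ x                         # response of the cortical unit
        w = w + 0.01 * y * x              # Hebbian potentiation
        w = w / w.sum()                   # competitive normalization
    return w

normal = ocular_dominance()
mono = ocular_dominance(deprived=True)
```

With both eyes active, correlated input keeps the weights roughly balanced; with one eye silenced, the open eye captures essentially all of the unit's connectivity, mirroring the exclusive access to layer IV described above.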
The irreversible loss of binocular vision during development must therefore be due to a combination of environmental experience and cortical learning processes (Knudsen, 2004). The fact that the existence of a sensitive period can depend upon the occurrence of a particular environment suggests that in early development, portions of networks become perceptually biased, making future modifications more difficult. In the literature on both speech and face perception, for example, the perceptual window through which faces and speech are initially processed is broadly tuned, then narrows with experience. Pascalis et al. (2005) demonstrated that six- and nine-month-old infants and adults can readily discriminate two human faces, but only six-month-old infants can discriminate two monkey faces. Similarly, six-month-olds given three months of experience viewing monkey faces can readily discriminate monkey faces at nine months of age, whereas nine-month-old infants not afforded such experience cannot (Pascalis et al., 2005). As a rule, circuits that process lower-level information mature earlier than those that process higher-level information (Scherf, Behrmann, Humphreys, & Luna, 2007). For example, in the neural hierarchy that processes visual information, low-level circuits that encode the color, shape, or motion of stimuli are fully mature long before the high-level circuits that detect or identify biologically important stimuli, such as faces, food, or frequently used objects (Knudsen, 2004; Scherf et al., 2007). The process by which initial learning leads to a constraint on later learning is termed entrenchment, and it is equally apparent in the development of speech (Munakata & Pfaffly, 2004; Seidenberg & Zevin, 2006). Several studies have shown, for example, that adults are often better at discriminating non-native phonetic contrasts when they differ substantially from the phonemes of their native language (Kuhl, 2004).
Adults are poorer at discriminating phonetic contrasts that are similar to contrasts of their native language. This is akin to the developing auditory system, which is more capable of discriminating tones outside the tonal environment in which it has been hearing. At the level of both tone and phonetic discrimination in speech, there is evidence for a fixed bias of the neural network. As discussed in the case of visual networks, however, neurons may be constantly modifying connectivity, allowing learning from new environments to compete against already existing tendencies. The role of environment and of inputs to the brain may therefore be seen as critical in the biasing of network formation during early life. Altered patterns of enhancement and inactivity are thought to be the basis for neural plasticity and have been suggested in humans by studies of tactile and auditory perception in the blind, where such systems may even activate “visual” cortex (Merabet, Rizzo, Amedi, Somers, & Pascual-Leone, 2005). It is likely that changes in experience have a greater impact on an untrained ‘young’ network than the same experience has on an ‘older’ trained network. This biasing feature is suggested by studies on aphasia showing that words learned earlier in life are more resistant to loss, and are more easily accessed in naming tasks, than words learned later (Greenough, Black, & Wallace, 1987). It has been suggested that learning through experience leads to the capacity to understand specific environments and the responses needed for those environments (Anisman, Zaharia, Meaney, & Merali, 1998). Similarly, changes in the environment – particularly when they are dramatic and pervasive – may have the power to alter neural connectivity and cognitive processing between systems. Examples can be found in studies of sensory deprivation, such as blindfolding, as well as of sensory enhancement through technology.
In studies of deaf children receiving cochlear implants, it is clear that language learning improves with earlier correction (Tomblin, Barker, Spencer, Zhang, & Gantz, 2005). It remains to be determined, however, whether this effect upon learning is due to actual changes in cognitive capacity or to changes in the learning environment brought about by the ability to interact with others through spoken language. The nature of the child's experiences, particularly during a time-limited period in early development, can profoundly affect the mental framework we use to understand the world around us. Sensitive periods in child development are of interest because they represent a timeframe in which our capabilities can be modified and perhaps enhanced. The quality of experiences during such periods – be they adverse or enhancing – is also important in understanding why it may be difficult to return to normal once development has been altered. While explanatory models for the timing of early experiences have generally been framed at the genetic or neural circuit level, our direct observations of the effects of early environments are often made at the behavioral level. Through the study of sensitive periods, we are better able to understand the impact that early experience may have upon development. To cite but one example, it has recently been demonstrated that otherwise typically developing young children institutionalized at birth have IQs in the low 70s. However, placing such children in high-quality foster care before the age of two years leads to a dramatic increase in IQ (Nelson et al., 2007). A similar trend also occurs for language (Windsor, Glaze, & Koga, 2007) and the development of the EEG (Marshall, Reeb, Fox, Nelson, & Zeanah, 2008), although in the case of the former, the sensitive period occurs around 16-18 months.
It is important to note recent work suggesting that the brain retains the capacity to adapt and change throughout the lifespan (Keuroghlian & Knudsen, 2007). However, the foundation of brain architecture must be laid in the early developmental years, and the influence of the childhood environment is much more salient in such basic cognitive processes as sensory perception (Amedi et al., 2007; Knudsen, 2004; Pascual-Leone et al., 2005). Each sensory and cognitive system has a unique sensitive period (Daw, 1997), and thus identical environmental conditions will result in very different cognitive and emotional experiences for a child, depending upon his or her age (Amedi et al., 2007; Trachtenberg & Stryker, 2001; Tritsch, Yi, Gale, Glowatski, & Bergles, 2007). Behavioral analysis can demonstrate the value of early experiences in the development of the brain. It must be remembered, however, that information is processed in a series of networks, each reflecting the effects of environment at varying time points. Higher-level processing may mask modifications in lower-level networks (Daw, 1997; Feldman & Knudsen, 1998; Trachtenberg & Stryker, 2001). Thus, behavioral outcomes may be shaped by later experience, even though circuits at the lowest levels in a pathway remain irreversibly altered. In addition, studies of the plasticity of sensory processing reveal that similar information can be derived from alternative pathways (Akins, 2006; Pascual-Leone et al., 2005; Melchner, Pallas, & Sur, 2000). For example, when using sound devices to assess space, blind individuals have been shown to activate lateral occipital cortex in the same manner as sighted individuals do through vision (Amedi et al., 2007). It has been suggested that loss of sensory input – such as occurs in late blindness – may in fact lead to the unmasking and strengthening of alternative pathways stemming from multisensory integration regions of the brain (Pascual-Leone et al., 2005).
These pathways may not only substitute for the original sensory inputs, but may enhance previously existing capabilities. This form of sensory enhancement can often be seen in the highly tuned auditory and tactile perception of the blind. High-level neural circuits that carry out sophisticated mental operations depend on the quality of the information that is provided to them by lower-level circuits. Low-level circuits whose architecture was shaped by healthy experiences early in life provide high-level circuits with precise, high-quality information. High-quality information, combined with sophisticated experience later in life, allows the architecture of circuits involved in higher functions to take full advantage of their genetic potential. Thus, early learning lays the foundation for later learning and is essential (though not sufficient) for the development of optimized brain architecture. Stated simply, rich early experience must be followed by rich and more sophisticated experience later in life, when high-level circuits are maturing, in order for full potential to be achieved (DeBello & Knudsen, 2004; Karmarkar & Dan, 2006; Sabatini et al., 2007). Elevated cerebral glucose metabolism can be observed between the ages of 3 and 10 years, corresponding to an era of exuberant connectivity and its attendant neuronal energy needs; in childhood the rate is measurably greater than in adults by a factor of two. PET scans measure the relative glucose metabolic rate. The complexity of the dendritic structures of cortical neurons is consistent with the expansion of synaptic connectivity and with increases in capillary density in the frontal cortex. During early childhood, cross-modal plasticity is more evident (Bavelier & Neville, 2002), with, as seen in Figure 4, exuberant connections between auditory and visual areas that gradually decrease in most children between 6 and 36 months of age (Neville & Bavelier, 2002).
PET and fMRI studies have shown that elderly people are less “optimized,” activating greater regions of the brain than younger individuals for a variety of motor tasks, including simple ones. Accuracy is not affected, but the greater area of brain involved in motor tasks among the elderly is highly associated with increases in reaction time, with greater surface-area activation and the recruitment of additional cortical and subcortical regions compared to younger individuals (Ward & Frackowiak, 2003). Knowing what we do about the neuroscience of plasticity and development under normal and enriched environments, we can understand that much of the knowledge base is predicated on a fair amount of research in lower organisms. This is not to say that there is no validity in extrapolating to normal child development. We know that brain networks, not necessarily structures, support cognitive function in the examples provided in the previous section. Classroom-based educational practice is supported by the knowledge base of Cognitive Psychology, which has been applied in the classroom to the analysis of reading by studying the component skills of word recognition, grammar and syntax, text analysis, and metacognition. Additionally, the encoding of visual and auditory information from printed words has been extensively examined, as has lexical access, which determines whether a visual representation matches a word in the reader's language. The tools of Cognitive Psychology have allowed the educator to better understand the component processes, skills, and knowledge structures underlying reading, mathematics, writing, and science (Bruer, 1993; Leisman, Machado, & Mualem, 2013; Skemp, 1987). Cognitive Psychology's presence in the classroom has directly led to numerous instructional tools and technologies, though this is not the forum in which to enumerate those advances (Carver & Klahr, 2013).
The missing piece is the application of the Cognitive Neurosciences and of engineering methods and methodology in the classroom. While applications of brain imaging have revealed much about the nature of thinking, problem solving, reading, sensory and perceptual processes, and understanding, the limited temporal resolution of these technologies makes it difficult to apply the findings in practical ways to classroom performance and its evaluation. Stanislas Dehaene and Jean-Pierre Changeux (Dehaene & Changeux, 1993) developed a neuronal network model of number processing, which made the prediction that the parietal cortex contains “numerosity detectors” (cf. Fig. 5). These detectors are neurons tuned to a specific number, firing preferentially, for instance, to sets of 3 objects. While it is very easy to fall into the trap of phrenology, with specific brain sites controlling specific functions, Dehaene and colleagues have in fact strongly argued for network approaches to understanding brain and cognition. And it is those networks that need development inside the framework of formal education. An example of this type of network may be seen in Fig. 6, the schematic functional and anatomical architecture of the triple-code model (Dehaene & Cohen, 1995), in which the localization of the main areas thought to be involved in the three numerical codes is depicted on a lateral view of the left and right hemispheres. The arrows indicate a functional transmission of information across numerical codes and are not meant as a realistic depiction of existing neural fiber pathways, whose organization is not fully understood in humans. Dehaene argues that multiple brain areas contribute to the cerebral processing of numbers; the inferior parietal quantity representation is only one node in a distributed circuit.
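The idea of a numerosity detector can be made concrete with a small sketch. Following the common modeling assumption of Gaussian tuning on a compressed (logarithmic) number line – the width parameter below is our arbitrary choice, not a fitted value – a unit “preferring” 3 responds maximally to sets of 3 and progressively less to neighboring numerosities:

```python
import math

def detector_response(preferred, n, width=0.25):
    """Gaussian tuning on a logarithmic 'mental number line': a unit
    preferring `preferred` fires most for sets of that size, and less
    as the (log-scaled) distance to `n` grows."""
    return math.exp(-((math.log(n) - math.log(preferred)) ** 2)
                    / (2 * width ** 2))

# Responses of a '3-detector' to set sizes 1 through 9.
responses = {n: detector_response(3, n) for n in range(1, 10)}
best = max(responses, key=responses.get)
```

Because the tuning curves of nearby numbers overlap on this compressed line, a 3-detector still responds appreciably to 2 and 4 but hardly at all to 9, which is why close numerical comparisons are harder than far ones.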
The triple-code model of number processing (Dehaene & Cohen, 1995) makes explicit hypotheses about where these areas lie, what they encode, and how their activity is coordinated in different tasks (Fig. 6). Functionally, the model rests on three fundamental hypotheses. First, numerical information can be manipulated mentally in three formats: an analogical representation of quantities, in which numbers are represented as distributions of activation on the mental number line; a verbal format, in which numbers are represented as strings of words (e.g., thirty-seven); and a visual Arabic representation, in which numbers are represented as strings of digits (e.g., 37). Second, transcoding procedures enable information to be translated directly from one code to another. Third, each calculation procedure rests on a fixed set of input and output codes. For instance, the operation of number comparison takes as input numbers coded as quantities on the number line. Likewise, the model postulates that multiplication facts are memorized as verbal associations between numbers represented as strings of words, and that multi-digit operations are performed mentally using the visual Arabic code. Dehaene (1996) designed an experiment to test a serial model of numerical comparison. In his experiment, right-handed college students had to decide whether a number flashed on a computer screen was larger or smaller than five, then press a key to indicate their response. Dehaene manipulated three independent factors, where each factor was assumed to influence processing within only one of the model's stages. For the stimulus identification stage, he contrasted subjects’ performance when given Arabic (1, 4, 6, 9) versus verbal notation (one, four, six, nine). For the magnitude comparison stage, he compared subjects’ performance on close (4, 6 and four, six) versus far (1, 9 and one, nine) comparisons to the standard 5.
His reason for choosing this factor is the well-established distance effect (Moyer & Landauer, 1967): it takes subjects longer, and they make more errors, when asked to compare numbers that are close in numerical value than when asked to compare numbers that are farther apart. In Dehaene's experiment, half the comparisons were close and half were far, a factor that should affect only the magnitude comparison stage. Finally, on half the trials, subjects had to respond “larger” with their right hand and “smaller” with their left hand, and on half the trials, “larger” with their left and “smaller” with their right. This factor should influence reaction times only in the motor preparation and execution stage. When Dehaene analyzed subjects’ reaction times on the numerical comparison task, he found that the overall median (correct) reaction time was around 400 milliseconds: subjects needed less than half a second to decide whether a number was greater or less than 5. Furthermore, he found that each of the three factors had an independent influence on reaction time. Reactions to Arabic stimuli were 38 milliseconds faster than those to verbal notation, far comparisons were 18 milliseconds faster than close comparisons, and right-hand responses were 10 milliseconds faster than left-hand responses. Finally, the three factors had an additive effect on subjects’ total reaction times, just as one would expect if subjects were using the serial-processing model. Dehaene's experiment, however, went beyond the typical cognitive experiment, which would have stopped with the analysis of reaction times. Dehaene also recorded event-related potentials (ERPs) while his subjects performed the number comparison task. His ERP system measured electrical currents emerging from the scalp at 64 sites, currents presumably generated by the electrical activity of large numbers of nearby neurons.
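Under the serial-stage assumption, the three effects reported above combine by simple addition. A minimal sketch – in which the baseline constant is our hypothetical choice, picked only so that typical trials land near the ~400 ms median, while the three increments are the reported effect sizes:

```python
BASE_RT_MS = 374  # hypothetical baseline; only the increments below come from the reported data

def predicted_rt(notation, distance, hand):
    """Additive-factors prediction for the serial comparison model:
    each stage contributes its cost independently, so the total
    reaction time is a simple sum of stage costs."""
    rt = BASE_RT_MS
    rt += 38 if notation == "verbal" else 0  # verbal notation slower than Arabic
    rt += 18 if distance == "close" else 0   # close comparisons slower than far
    rt += 10 if hand == "left" else 0        # left-hand responses slower than right
    return rt

fastest = predicted_rt("arabic", "far", "right")
slowest = predicted_rt("verbal", "close", "left")
```

The signature of additivity is that the slowest and fastest conditions differ by exactly the sum of the three effects (38 + 18 + 10 = 66 ms), with no interaction terms – which is what Dehaene observed.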
ERPs have relatively poor spatial resolution, but relatively precise temporal resolution. Significant changes in the electrical activity recorded at each of the 64 sites as subjects compared numbers could give general indications of where in the brain the neural structures implementing the three processing stages lie, while the ERPs’ more precise temporal resolution could indicate the time course of the three stages. Together, the spatial and temporal data would allow Dehaene to trace, at least approximately, the neural circuitry that is active in numerical comparison. A cognitive model together with brain recording techniques opened the possibility of mapping sequences of elementary cognitive operations onto their underlying neural structures and circuits. The first significant ERP effect Dehaene observed occurred 100 milliseconds after the subjects saw either the Arabic or verbal stimulus. This change in brain activity was not influenced by any of the experimental factors, and it appeared to occur in the right posterior portion of the brain. Based on this and other imaging and recording experiments, early activation in that part of the brain is most likely the result of the brain's initial, nonspecific processing of visual stimuli. At approximately 146 milliseconds after stimulus presentation, Dehaene observed a notation effect. When subjects processed number words, they showed a significant negative electrical wave on the electrodes recording from the left posterior occipito-temporal brain areas. In contrast, when subjects processed Arabic numerals, they showed a similar negative wave on electrodes recording from both the left and right posterior occipito-temporal areas. This suggested that number words are processed primarily on the left side of the brain, but that Arabic numerals are processed on both the left and right sides.
To look for a distance effect and the timing and localization of the magnitude comparison stage, Dehaene compared the ERPs for digits close to 5 (4, four and 6, six) with the ERPs for digits far from 5 (1, one and 9, nine). This comparison revealed a parieto-occipito-temporal activation in the right hemisphere that was associated with the distance effect. This effect was greatest approximately 210 milliseconds before the subjects gave their responses. What is significant here, according to Dehaene, is that the timing and distribution of the electrical currents were similar for both Arabic digits and verbal numerals. This supports the claim, Dehaene argues, that there is a common, abstract, notation-independent magnitude representation in the brain that we use for numerical comparison. To make a numerical comparison, we apparently translate both number words and Arabic digits into this abstract magnitude representation. Finally, Dehaene found a response-side effect that occurred approximately 332 milliseconds after the stimulus or, equivalently, 140 milliseconds before the key press. This appeared as a substantial negative wave over motor areas of the brain. The motor area in the left hemisphere controls movement of the right side of the body, and the motor area in the right hemisphere controls movement of the left side of the body. Thus, as expected, this negative wave appeared over the left hemisphere for right-hand responses and over the right hemisphere for left-hand responses. Dehaene's experiment exemplifies how cognitive neuroscientists use cognitive theories and models in brain imaging and recording experiments. Well-designed, interpretable imaging and recording studies demand analyses of cognitive tasks, construction of cognitive models, and use of behavioral data, like reaction times, to validate the models.
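The sequence of ERP effects described above can be summarized as an ordered stage table. The layout below is our own organization of the figures given in the text:

```python
# (latency in ms after stimulus onset, effect, scalp distribution)
ERP_STAGES = [
    (100, "initial visual processing", "right posterior"),
    (146, "notation effect", "left (words) vs. bilateral (digits) occipito-temporal"),
    (332, "response-side effect", "motor areas contralateral to the responding hand"),
]
# The distance effect peaked ~210 ms before the response rather than at a
# fixed post-stimulus latency, so it is not listed with a fixed onset here.
MEDIAN_RT_MS = 400

def stages_in_order(stages):
    """Check that the listed effects form a forward-moving time course
    that completes before the typical response."""
    times = [t for t, _, _ in stages]
    return times == sorted(times) and times[-1] < MEDIAN_RT_MS
```

Reading the stages in order recovers the serial model's logic: identify the stimulus, compare its magnitude, then prepare and execute the response.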
Experiments like these suggest how neural structures implement cognitive functions, tell us new things about brain organization, and suggest new hypotheses for further experiments. Dehaene's experiment traces the approximate circuitry the brain uses to identify, compare, and respond to number stimuli, and it reveals several new things about brain organization that suggest hypotheses for further experiments. First, the experiment points to a bilateral neural system for identifying Arabic digits. This is something that one could not discover by analyzing behavioral data from normal subjects, nor could it be reliably and unambiguously established from the neuropsychological study of patients with brain lesions and injuries. In fact, Dehaene suggests, the existence of such a bilateral system could explain some of the puzzling patterns of lost versus retained number skills neuropsychologists see in these patients. Second, the experiment suggests there is a brain area in the right hemisphere that is used in numerical comparison. This area might be the site of an abstract representation of numerical magnitude, a representation that is independent of our verbal number names and written number symbols. This, too, runs counter to common neuropsychological wisdom. Neuropsychologists commonly hold that the left parieto-occipito-temporal junction, not the right, is the critical site for number processing, because damage to this area in the left hemisphere causes acalculia. Dehaene's finding of right-hemisphere involvement during the comparison phase suggests that neuropsychologists should look more carefully than they might have in the past at numerical reasoning impairments among patients who have suffered damage to the right posterior brain areas.
They might find, for example, patients who are able to read Arabic numerals and perform rote arithmetic calculations, but who are unable to understand numerical quantities, make numerical comparisons, or understand approximate numerical relations. Dehaene's work is just one example of how cognitive neuroscience is advancing our understanding of how brain structures might support cognitive functions. Cognitive neuroscientists at numerous institutions are starting to trace the neural circuitry for other cognitive constructs and culturally transmitted skills. Several studies suggest that automatic and controlled processing rely on distinct brain circuits (Raichle et al., 1994). Other studies show how attention can reorder the sequence in which component cognitive skills are executed in a task: the areas of brain activation remain the same, but the sequence in which the areas become active changes (Posner & Raichle, 1994). We are beginning to understand the different brain systems that underlie language processing and their developmental time course (Neville & Bavelier, 2002). Using our rather detailed cognitive models of reading – particularly word recognition – PET, fMRI, and ERP studies allow us to trace the neural circuitry for early reading skills and to document the developmental course of this circuitry in children between the ages of 5 and 12 years (Posner, Abdullaev, McCandliss, & Sereno, 1999). However, in most cases we are still far from understanding how these results might contribute to advances in the clinic, let alone in the classroom. It is not yet clear how we move from results like these across the bridge to educational research and practice. The example does, however, make two things clear. First, there is no way that we could possibly understand how the brain processes numbers by looking at children's classroom or everyday use of numbers or by looking at math curricula.
Second, there is no way we could possibly design a math curriculum based on Dehaene's results. It is the cognitive research that enables both of those possibilities. When we do come to understand how to apply cognitive neuroscience in instructional contexts, it is likely that it will first be of most help in addressing the educational needs of special populations. Cognitive Psychology allows us to understand how learning and instruction support the acquisition of culturally transmitted skills like numeracy and literacy. Cognitive Psychology in combination with brain imaging and electrophysiological recording technologies also allows us to see how learning and instruction alter brain circuitry, and it opens the possibility of seeing and comparing these learning-related changes in normal versus special learning populations. Such comparative studies might yield insights into specific learning problems and, more importantly, into the alternative, compensatory strategies, representations, and neural circuits that children who learn with greater difficulty than others in traditional learning settings might exploit. These insights could in turn help us develop better instructional interventions to address specific learning problems. Sensory information undergoes extensive organization into the associative networks necessary for its incorporation into the texture of cognition. The normal operation of such a system allows for the integration of motor and cognitive functions of the kind one sees in reading and language. Damage to, or dysfunction in, this system of the kind often found in post-stroke individuals can be exemplified in disconnection syndromes such as alexia without agraphia and a color-naming deficit with no other form of anomia in evidence (Leisman, 1976; Leisman, 2011; Leisman, Braun-Benjamin et al., 2014).
This process of integration occurs along a synaptic hierarchy, which includes the primary sensory, up- and downstream unimodal, hetero-modal, paralimbic and limbic zones of the cerebral cortex. Connections from one zone to another are reciprocal and allow higher synaptic levels to exert a feedback (top-down) influence upon earlier levels of processing. Each cortical area provides a nexus for the convergence of afferents and divergence of efferents. The resultant synaptic organization allows each sensory event to initiate multiple cognitive and behavioral outcomes. Upstream sectors of unimodal association areas encode basic features of sensation such as color, motion, form, and pitch. More complex contents of sensory experience such as objects, faces, word-forms, spatial locations, and sound sequences become encoded within downstream sectors of unimodal areas by groups of coarsely tuned neurons. Hetero-modal, paralimbic and limbic cortices, collectively known as trans-modal areas, occupy the highest synaptic levels of sensory-fugal processing. The unique role of these areas is to bind multiple unimodal and other trans-modal areas into distributed but integrated multimodal representations. Trans-modal areas in the mid-temporal cortex, Wernicke's area, the hippocampal-entorhinal complex and the posterior parietal cortex provide critical gateways for transforming perception into recognition, word-forms into meaning, scenes and events into experiences, and spatial locations into targets for exploration. All cognitive processes arise from analogous associative transformations of similar sets of sensory inputs. The differences in the resultant cognitive operation are determined by the anatomical and physiological properties of the trans-modal node that acts as the critical gateway for the dominant transformation. Interconnected sets of trans-modal nodes provide anatomical and computational epicenters for large-scale neurocognitive networks.
In keeping with the principles of selectively distributed processing, each epicenter of a large-scale network displays a relative specialization for a specific behavioral component of its principal neuropsychological domain. The human brain contains at least five anatomically distinct networks. The network for spatial awareness is based on trans-modal epicenters in the posterior parietal cortex and the frontal eye fields; the language network on epicenters in Wernicke's and Broca's areas; the explicit memory/emotion network on epicenters in the hippocampal-entorhinal complex and the amygdala; the face-object recognition network on epicenters in the mid-temporal and temporopolar cortices; and the working memory-executive network on epicenters in the lateral prefrontal cortex and perhaps the posterior parietal cortex. Individual sensory modalities give rise to streams of processing directed to trans-modal nodes belonging to each of these networks. The fidelity of sensory channels is actively protected through approximately four synaptic levels of sensory-fugal processing. The modality-specific cortices at these four synaptic levels encode the most veridical representations of experience. Attentional, motivational, and emotional modulations, including those related to working memory, novelty-seeking, and mental imagery, become increasingly more pronounced within downstream components of unimodal areas, where they help to create a highly edited subjective version of the world. The synaptic architecture of large-scale networks and the manifestations of working memory, novelty-seeking behaviors, and mental imagery collectively help to loosen the rigid stimulus-response bonds that dominate the behavior of lower animal species. This phylogenetic trend has helped to shape the unique properties of human consciousness and to induce the emergence of second order (symbolic) representations related to language.
Through the advent of language and the resultant ability to communicate abstract concepts, the critical pacemaker for human cognitive development has shifted from the extremely slow process of structural brain evolution to the much more rapid one of distributed computations, where each individual intelligence can become incorporated into an interactive lattice that promotes the trans-generational transfer and accumulation of knowledge. The transfer of knowledge from the environment and the development of skills to interact with that environment are a direct consequence of the ability to organize physical and measurable associational networks. Examples of such networks for language are represented in Fig. 7 (A and B), which in turn represent language embodiment in the networks rather than language ascribed to one particular brain region. Fig. 8 represents the power of connectography, based on graph theory, in measuring the efficiencies of practical learning.
Fig. 7. (A) Multiple stream models of receptive language function organized into multiple self-organizing, simultaneously active networks. (B) Grounded meaning indicates that the meanings of words and sentences are “embodied.”
Fig. 8. (A) Characterization, organization, and development of large-scale brain networks in children using graph-theoretical metrics. (B) The graph on the left is of a typically developing (TD) child (17 mo, 40%) and the graph on the right is of an at-risk, late talker (LT) (24 mo, 10%). The network of the TD child represents the 60 words in the child's productive vocabulary and the network of the at-risk LT child represents the 61 words in the child's productive vocabulary. The apparent visual differences in the networks are supported by the differences in the corresponding table, with the typical talker's network showing a higher clustering coefficient and higher median in-degree, but lower geodesic distance, than the LT. These differences are consistent at both the individual and population level (cf. Leisman, 2013).
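The graph metrics cited in the caption above – clustering coefficient, median in-degree, and geodesic distance – can all be computed directly from a word-association graph. The sketch below is illustrative only: the five-word network and its links are invented, not taken from the Leisman (2013) data, and clustering is computed on the undirected version of the graph for simplicity.

```python
from collections import deque
from statistics import median

# Toy directed "semantic network": an edge points from a word to a word it
# cues. The words and links here are invented purely for illustration.
graph = {
    "dog":  ["cat", "ball", "run"],
    "cat":  ["dog", "milk"],
    "ball": ["run", "dog"],
    "run":  ["ball"],
    "milk": ["cat"],
}

def in_degrees(g):
    # Number of incoming links per word.
    deg = {v: 0 for v in g}
    for src in g:
        for dst in g[src]:
            deg[dst] += 1
    return deg

def clustering(g):
    # Mean local clustering on the undirected version of the graph:
    # the fraction of each node's neighbour pairs that are themselves linked.
    und = {v: set() for v in g}
    for src in g:
        for dst in g[src]:
            und[src].add(dst)
            und[dst].add(src)
    coeffs = []
    for v, nbrs in und.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in und[a])
        coeffs.append(links / (k * (k - 1) / 2))
    return sum(coeffs) / len(coeffs)

def mean_geodesic(g):
    # Mean shortest-path length over all reachable ordered pairs (BFS).
    total, pairs = 0, 0
    for src in g:
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for w in g[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

print("median in-degree:", median(in_degrees(graph).values()))
print("mean clustering :", round(clustering(graph), 3))
print("mean geodesic   :", round(mean_geodesic(graph), 3))
```

On the caption's reading, a denser vocabulary network (higher clustering and in-degree, shorter geodesics) is the profile of the typical talker, and these three numbers are exactly what such a comparison table would report.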
What we can learn from the characterization, organization, and development of large-scale brain networks in children using graph-theoretical metrics (Leisman, Rodriguez-Rojas et al., 2014) is that functional brain networks in children and young adults exhibit small-world properties. In mathematics, physics, and sociology, a small-world network is a type of mathematical graph in which most nodes are not neighbors of one another, but most nodes can be reached from every other node by a small number of steps. Specifically, a small-world network is defined as a network where the typical distance L between two randomly chosen nodes (the number of steps required) grows proportionally to the logarithm of the number of nodes N in the network (Watts & Strogatz, 1998). Functional connectivity networks of the brain derived from EEG (Leisman, 2011) as well as MEG (Stam, 2004) have also been shown to possess small-world architecture. Large-scale brain networks in 7-9-year-old children exhibit a similar small-world functional organization. Functional brain networks in children show lower levels of hierarchical organization compared to young adults. Children and young adults possess different interregional connectivity patterns, with stronger subcortical-cortical connectivities in young adults and weaker cortico-cortical connectivities in children. Large-scale brain connectivity involves functional segregation and integration, with stronger short-range connections in children and stronger long-range connections in young adults. In taking this concept further, we note that what is represented in Fig. 8 (A) is functional connectivity along the posterior-anterior and ventral-dorsal axes, showing elevated subcortical connectivity and decreased paralimbic connectivity in children compared to young adults. This clearly demonstrates that the wiring and connectivities of young children are significantly different from those of teenagers.
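The Watts–Strogatz property just described can be illustrated numerically: a plain ring lattice has long average paths, while adding a handful of random long-range shortcuts collapses the average path length toward the logarithmic, small-world regime. This is a minimal pure-Python sketch; the network size and shortcut count are arbitrary choices for illustration, not parameters from the cited studies.

```python
import random
from collections import deque

def ring_lattice(n, k=4):
    # Ring of n nodes, each linked to its k nearest neighbours (k/2 per side).
    g = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k // 2 + 1):
            g[i].add((i + d) % n)
            g[(i + d) % n].add(i)
    return g

def add_shortcuts(g, m, seed=0):
    # Add m random long-range edges -- the "small-world" rewiring step.
    rng = random.Random(seed)
    nodes = list(g)
    for _ in range(m):
        a, b = rng.sample(nodes, 2)
        g[a].add(b)
        g[b].add(a)
    return g

def avg_path_length(g):
    # Average shortest-path length L over all node pairs, via BFS.
    total, pairs = 0, 0
    for src in g:
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for w in g[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

n = 200
lattice = ring_lattice(n)
small_world = add_shortcuts(ring_lattice(n), m=20)
print("ring lattice L:", round(avg_path_length(lattice), 2))      # grows ~ n
print("small-world L :", round(avg_path_length(small_world), 2))  # ~ log n
```

The contrast between the two printed values is the small-world effect in miniature: a few long-range connections are enough to make every node reachable from every other in a small number of steps.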
The change in organization of these connectivities directly speaks to the issue of optimization of pathways and is a direct consequence of training and therefore of education. In attempting to apply graph theory to an understanding of language acquisition, Fig. 8 (B) shows the responses of both typically developing (TD) children and of at-risk, late talkers (LT). There exists a significant and apparent visual difference between the networks, with the TD child's network showing a higher clustering coefficient and higher median in-degree, but lower geodesic distance, than the LT child's. Vision is thought to be like all other high-level abilities and therefore does not involve a single process.
The function of early childhood play in general, and formal education in particular, is to integrate the various subsystems that compute information about the spatial properties of objects, movement, shape, color, etc. One type of visual processing is accomplished by the so-called ventral system, because the computations take place in the more ventral occipito-temporal and inferior-temporal cortices. This system, which focuses on object recognition, has also been characterized as the “what system” (Ungerleider & Mishkin, 1982), as opposed to the “where system” functioning in the parietal lobe. The two hemispheres are known to act differently in the way they encode shapes (Leisman, 1976; Melillo & Leisman, 2009). It has been argued that many different functions of vision could be achieved effectively if the system could encode information at multiple levels of scale (Marr, 1982). In other words, one way to distinguish whether you are seeing an edge or just a change in texture is to determine whether changes in intensity are present at multiple scales. For instance, if they are noticeable only at high resolution, they are most likely texture variations; if they are present at multiple levels, they are probably edges (DeValois & DeValois, 1988). The evidence from research at this time suggests that the two hemispheres focus on different types of features of visual input when forming object representations. The left hemisphere is thought to focus on smaller parts, higher spatial frequencies, or details. The right hemisphere is thought to focus on the global form, lower spatial frequencies, or coarse patterns. Two theories have been proposed to explain this asymmetry (Brown & Kosslyn, 1995). One, a structural theory, proposes that one or more processing subsystems have become specialized in the hemispheres.
The allocation theory states that the hemispheres tend to employ different strategies that often produce these results, but that there are no specific structural differences between the hemispheres. Visual processing can be divided into three phases of low, intermediate, and high levels (Marr, 1982), and the hemispheres can differ in their allocation of resources at any of these levels. While, as with most lateralized functions, each type of processing is found in both hemispheres to different degrees, the hemispheres differ in the relative efficiency of the individual subsystems for a particular type of processing (structural theory) or in their predominance for using certain strategies (allocation theory) (Brown & Kosslyn, 1995). The neurodevelopmental skills represented in pre-primary and primary education should be consistent with the normal development of visual processing. At the lowest level, subsystems organize the input so that distinct figures are separated from the ground. This processing takes place in a structure known as the visual buffer (Kosslyn, 1987). Computations in the visual buffer specify edges, regions of common color and texture, and other characteristics that distinguish one object from its background. It is thought that not all the information in the visual buffer can be considered in detail; therefore, some information is chosen for additional processing. This has been referred to as an attention window that can be focused, with a specific size and shape, on a specific location in the visual buffer for further processing (Treisman & Gelade, 1980). According to structural theories, the right hemisphere may more effectively detect large variations in light intensity over space, while the left hemisphere more efficiently detects small variations in light intensity over space. This would suggest that the hemispheres differ in their sensitivity to different spatial frequencies (Sergent & Hellige, 1986).
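The multi-scale rule described earlier (Marr, 1982; DeValois & DeValois, 1988) can be sketched in one dimension: an intensity change that survives coarse smoothing is treated as an edge, while one visible only at fine scale is treated as texture. The signals, the box-filter stand-in for Gaussian blur, and the 0.2 threshold below are all invented for illustration.

```python
def smooth(signal, width):
    # Box smoothing with the given half-width (a crude stand-in for
    # Gaussian blur at scale `width`).
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - width), min(n, i + width + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def max_change(signal, span):
    # Largest intensity change across `span` samples (change at that scale).
    return max(abs(signal[i + span] - signal[i]) for i in range(len(signal) - span))

# Texture: rapid small oscillations. Edge: a single large step.
texture = [1.0 if i % 2 else 0.0 for i in range(40)]
edge = [0.0] * 20 + [1.0] * 20

for name, sig in [("texture", texture), ("edge", edge)]:
    fine = max_change(smooth(sig, 1), 1)     # change surviving light blur
    coarse = max_change(smooth(sig, 5), 5)   # change surviving heavy blur
    kind = "edge" if fine > 0.2 and coarse > 0.2 else "texture"
    print(f"{name}: fine={fine:.2f} coarse={coarse:.2f} -> {kind}")
```

Both signals show a strong change at the fine scale, but only the step edge still shows one after heavy smoothing, which is exactly the diagnostic the text describes.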
At the intermediate level of visual processing, stimuli are organized into perceptual groups useful for later object recognition. In this model, the contents of the attention window are sent to a preprocessing subsystem in the occipito-temporal regions. Features such as texture gradients and color are found at this level (Kosslyn, Flynn, Amsterdam, & Wang, 1990). According to the structural theories, the subsystem is tuned to detect different kinds of information in the two hemispheres. For instance, a child may wish to look at an overall pattern like a face, whereas in other instances the child may want to look at one of its components, such as an eye. The two processes are not compatible with each other: the global process needs to incorporate into a whole the very things that the local process needs to separate out. Therefore, it may be more efficient to have separate processes operate in parallel at the two levels. These concerns should be addressed in the approach that nursery, kindergarten, and primary teachers take in working with perceptual problem solving and with the organization of visually based classroom materials. High-level vision is concerned with matching input to representations in stored memory. It is thought that an object is recognized when a match is made. In this same model, the output from the preprocessing subsystem serves as the input to the pattern activation subsystem found in the inferior temporal lobes. It is here that the perceptual input is compared to the stored visual information, and recognition is achieved if a match is made. If the input does not match a previously stored representation well enough, then the new pattern is stored. It is thought that size per se is not likely to be represented at the level of object recognition.
It is thought that neurons in the inferior temporal lobe that are sensitive to high-level visual properties are insensitive to changes in visual angle (Leisman, 1976; Melillo & Leisman, 2009; Plaut & Farah, 1990). The hemispheres may differ in their ability to encode parts and wholes at different levels of hierarchy in a structural description. A structural description specifies how components are organized to compose a whole. The shape of a person is one example given to illustrate how parts are organized to compose a whole. In this example, a person is represented as a tree diagram, with the body at the top; head, trunk, arms, and legs as branches; and upper arm, forearm, and hand as branches from the arm (Marr, 1982). It is possible, according to one theory, that one hemisphere could store the (larger) wholes and the other could store the (smaller) parts. Another theory suggests that it is not size that differs but that the hemispheres store representations by preferred level of hierarchy: the left hemisphere may compute input farther down in a structural hierarchy, whereas the right hemisphere may compute input at higher levels of the hierarchy. Several experiments have been used to verify the functional differences between the two hemispheres. These are useful to review because they emphasize the functional differences in a practical way. In addition, the same techniques that are used to identify functions can be used to diagnose dysfunctions, or to determine whether one hemisphere is decreased in activation as compared to the other. Additionally, if we know what each hemisphere responds to, we can later use this information to concentrate rehabilitation on the performance of one hemisphere.
One of the most common experiments (Heinke & Humphreys, 2003) involves the use of the letter stimuli that are most commonly used in global precedence studies; the features of the global-level object (e.g., two vertical lines and one horizontal line forming an H) are determined by the positioning of the local elements. Therefore, there is a confound between size and level of hierarchy: the larger letter is made up of smaller letters. In this case, the term hierarchy refers to objects that are made up only of their constituent parts. For example, a dog's body is hierarchically structured because it is made out of head, trunk, legs, and so on; if these parts are removed, nothing remains. In contrast, patterns on a shirt are not hierarchically related to the shirt; if one removes them, the shirt remains intact. In other experiments (Paquet & Merikle, 1988), investigators removed this confound for letter stimuli. In addition to letter stimuli, picture stimuli could be employed that are not hierarchically arranged, removing the possibility of engaging processes that are specialized for reading. In most real-world objects, global features provide general information about object identity, whereas local features can be used to identify specific information. In one experiment (Martin, 1979), stimuli were presented consisting of pictures of garments with smaller pictures on them; the larger pictures were not composed of the smaller ones, and therefore there was no hierarchic relation between the two. The smaller pictures were also garments, providing the same types of objects at the local and global level without a hierarchic arrangement. In one divided visual field study of global precedence (Martin, 1979), it was found that the global (larger) level was processed faster than the local (smaller) level in both hemispheres when the global- and local-level letters were different.
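A hierarchical letter stimulus of the kind just described can be mocked up in a few lines: the global letter H exists only in the arrangement of the local elements, so removing the local letters removes the global one too. The shape grid and the choice of letters below are arbitrary.

```python
# A hierarchical (global/local) letter stimulus: a global "H" whose strokes
# are built entirely from local "s" elements. "." marks empty background.
H_SHAPE = [
    "s...s",
    "s...s",
    "sssss",
    "s...s",
    "s...s",
]

def hierarchical_letter(shape, local_letter):
    # Render each stroke cell as the local letter; background cells as spaces.
    return "\n".join(
        "".join(local_letter if cell != "." else " " for cell in row)
        for row in shape
    )

print(hierarchical_letter(H_SHAPE, "s"))
```

When the global and local letters differ, as here (H made of s), the stimulus carries the conflicting global-versus-local information that these divided visual field studies exploit.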
However, the global level was processed faster following right hemisphere presentation than following left hemisphere presentation when subjects attended to the global level. In addition, there was greater interference from the global-level letter when stimuli were presented initially to the right hemisphere than to the left when the subjects selectively attended to the local level. In this case, the subjects evaluated the stimuli more quickly when they were presented initially to the left hemisphere than to the right hemisphere. To test whether local-global hemisphere specialization differs for hierarchic letters versus non-hierarchic pictures (Lamb, Robertson, & Knight, 1990; Martin, 1979), which would suggest high-level visual processing, subjects were shown two types of stimuli: letters composed of smaller letters, and articles of clothing with patterns of smaller articles of clothing printed on them. Results showed that for pictures, subjects did detect targets at the local level faster when stimuli were shown initially to the left hemisphere than to the right hemisphere; however, they evaluated targets at the global level equally well with both hemispheres. This confirms a local precedence and a trend toward the expected pattern of hemispheric specialization. In comparison, although a global precedence was found for the letter stimuli, the effect was the same whether stimuli were presented initially to the right hemisphere or to the left. For letters, subjects responded faster and made fewer errors when the targets were at the global level than when they were at the local level. In an attempt to explain the attention allocation hypothesis, Kosslyn, Chabris, Marsolek, and Koenig (1992) propose a specific mechanism whereby the right hemisphere preferentially monitors outputs from neurons that have relatively large receptive fields.
The hemispheres may also differ in their ability to monitor the outputs from different-size receptive fields even if the same outputs are available in both hemispheres (Kosslyn, Chabris, Marsolek, & Koenig, 1992). Kosslyn et al. (1992) suggest that the bias to encode outputs from neurons with different-size receptive fields allows the ventral (object) and dorsal (spatial) systems to be coordinated. It is thought that an individual during movement needs to know not only the precise metric distances of objects (dorsal) but also the specific shapes of objects (ventral). The two types of processing need to be linked, and it is thought that this is best achieved if they are both in the same hemisphere – the right (Kosslyn et al., 1992; Marsolek, Kosslyn, & Squire, 1992). In addition, when individuals attempt to identify objects, they may need to ignore variations in shape among specific examples and may need only to know the type of spatial relations among parts, not the specific positions of the parts of a given object. For example, to recognize the shape of a dog, one would ignore the type of dog and the exact position of its limbs. The left hemisphere is thought to have a special role both in generalizing over shapes (Marsolek et al., 1992) and in categorizing spatial relations (Kosslyn et al., 1992). Therefore, differences in receptive field size may act to coordinate the encoding of shapes and spatial relations, and could therefore cause the right hemisphere to specialize in computing metric spatial relations and specific shapes and the left hemisphere to specialize in computing categorical spatial relations and categories of shapes. So far we have suggested that the hemispheres differ in their effectiveness at focusing attention at scales of different size, and that the underlying mechanism involves sampling outputs from neurons with different-size receptive fields. This is similar to a spatial-frequency hypothesis.
In fact, the receptive field and spatial-frequency theories predict similar results, as the two concepts are closely related. The smaller a cell's receptive field, the higher the spatial frequency it will respond to; conversely, the larger the receptive field, the lower the spatial frequency. It is therefore thought that a large receptive field, or a lower spatial frequency, is more of a right hemisphere function, whereas small receptive fields and higher spatial frequencies are more of a left hemisphere function, with the effects modulated by attentional variables. Normal processing of global aspects is thought to depend on the posterior superior temporal lobe of the right hemisphere, and normal processing of local elements on the posterior superior temporal lobe of the left hemisphere (Lamb et al., 1990). Another important quality of visual information processing is the ability to localize a visual image in space. The observer may be required to detect whether one object touches another object – an on-off judgment. Another requirement is the discernment of near versus far, or above versus below. There are hemispheric performance differences for all of these tasks. Studies have shown that there is a left hemisphere advantage for the on-off tasks and a right hemisphere advantage for distance judgment tasks (Kosslyn, Sokolov, & Chen, 1989). Hellige and Michimata (1989) tested individuals by having observers indicate whether a dot was above or below a line (a categorical task, the above-below task) or whether the dot was within 2 cm of the line (a coordinate or distance task, the near-far task). For the above-below task there is a left hemisphere advantage, whereas the right hemisphere shows an advantage for the near-far task. Researchers have examined how these different asymmetries arise during the course of ontogenetic development. Compared to adults, the visual sensory system of newborns is especially limited in its transmission of information carried by high spatial frequencies (DeSchonen & Mathivet, 1989).
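The inverse relation between receptive-field size and preferred spatial frequency stated above can be demonstrated with a toy model: a derivative-of-Gaussian receptive-field profile of width sigma responds most strongly to a grating near 1/(2πσ) cycles per sample, so enlarging the field lowers the preferred frequency. The filter shape, sampling grid, and frequency range below are modeling assumptions for illustration, not physiological data.

```python
import math

def response(sigma, freq, half=30):
    # Quadrature amplitude of a derivative-of-Gaussian receptive field
    # (width `sigma`, in samples) responding to a sine grating of `freq`
    # cycles per sample.
    s = c = 0.0
    for x in range(-half, half + 1):
        w = -x * math.exp(-x * x / (2.0 * sigma * sigma))
        s += w * math.sin(2 * math.pi * freq * x)
        c += w * math.cos(2 * math.pi * freq * x)
    return math.hypot(s, c)

def preferred_freq(sigma):
    # Scan a grid of grating frequencies and return the best-responding one.
    freqs = [i / 1000.0 for i in range(1, 200)]
    return max(freqs, key=lambda f: response(sigma, f))

print("small field (sigma=2) prefers", preferred_freq(2.0), "cycles/sample")
print("large field (sigma=6) prefers", preferred_freq(6.0), "cycles/sample")
```

The larger field's preferred frequency comes out roughly a third of the smaller field's, mirroring the claim that coarse, low-frequency analysis and fine, high-frequency analysis can be carried by pools of neurons differing only in receptive-field size.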
It has been suggested that the development of various brain areas is more advanced in the right hemisphere than in the left at the time of birth, and possibly for a short time after. Hellige (1993) postulates a certain critical period for modification by incoming visual input, which occurs earlier for the right hemisphere than for the left. Once modified by highly degraded visual input, the right hemisphere is not only predisposed to become dominant for processing low spatial frequencies but is also less able than the left to take full advantage of higher frequencies when they finally do appear. If this notion is accurate, then it would also follow that the resulting hemispheric differences in visual processing would influence asymmetry for any task that depends on the relevant aspects of visual information, whether the activity requires stimulus identification or stimulus location. Although studies of the adverse effects of deprivation on brain development are powerful and compelling, they tell us little about the benefits of enrichment. Much of what we know about the impact of early experience on brain architecture comes from animal or human studies of deprivation. As we work to clarify further the patterns of genetic expression required for normal neural structure, we have also recognized that an optimal level of environmental input, or “expectable” environment, must exist in parallel. Increasing evidence suggests that this “expectable environment” of early development includes not only the variation in light necessary for vision, or the tones heard in a spoken language, but also the emotional support and familiarity of a caregiver (Nelson et al., 2007). The well-documented negative impacts of deprivation on brain circuitry do not mean that excessive enrichment produces measurable enhancements in brain function.
A small number of case reports exist in which neglected children with very little language experience in early childhood were given enriched language exposure in a protective environment (Curtiss, 1977; Itard, 1932). Longitudinal follow-up studies of these children demonstrated that, even after several years of language exposure, they were unable to achieve adult-level native language abilities. More recently, early intervention to correct a deeply impoverished early environment has been shown to greatly improve cognitive, linguistic, and emotional capabilities in humans (Nelson et al., 2007; Windsor et al., 2007). Activity-dependent mechanisms of network formation may be responsible for such changes when children are placed into a stimulating environment for learning and exploration. With continued research into the modification of sensitive periods, as well as the factors influencing cortical plasticity throughout life, we may remain optimistic about the possibility of recovery from early deprivation. This in turn may provide hope for children who lack the biological framework, or the environment, required for optimal neural and cognitive growth. The possibility of cognitive and neural rehabilitation leads to theories of enrichment beyond the norm to a level of enhanced development. Educational and environmental enrichment of preschool children from impoverished economic settings has been shown to improve cognitive measures through early adulthood (Campbell, Pungello, Miller-Johnson, Burchinal, & Ramey, 2001). Cognitive capabilities, however, may follow a pattern similar to the growth curve of the human body; that is, while it is possible to enhance the environment of a child to assume the pattern of the normal curve, it is not possible to exceed the predicted trajectory to a significant extent without causing some potential harm.
This is suggested by large studies of children of varying socioeconomic status, which demonstrated an improvement in cognitive performance only in those born into a low socioeconomic class, with no significant difference between those of middle-class and high-income families (Jefferis, Power, & Hertzman, 2002). If the possibility for enhancement exists, it is perhaps related to forms of enrichment that lie outside access to unlimited resources – i.e., beyond the expectable environment. Creativity, for example, is a key component of enhanced cognitive functioning, yet we have not been able to define the neural processes or environmental attributes that can enrich this aspect of cognition, nor are there sure-fire ways of boosting creativity among the population at large. Similarly, exposure to art or music or great literature or horseback riding may not confer any evolutionary advantage (i.e., reproductive success), yet these activities may confer some advantage among certain strata of society. Thus, perhaps it would be useful to draw a distinction between enrichment as applied to those experiencing downward deviations from the expectable environment (such as those reared in situations of neglect or deprivation) and enhanced enrichment applied to those reared in typical (expectable) environments. Enrichment may lead to a restoration of typical development, whereas enhanced enrichment may lead to exceeding the typical environment. Of course, a challenge here lies in accounting for individual differences, as some individuals have greater potential to benefit from art or music lessons than others. Individual heterogeneity may be under the control of gene × gene, gene × environment, and environment × environment factors.
For example, animal studies demonstrate that epigenetic mechanisms – whereby environmental factors and experiences early in life can permanently alter the genome of an individual through chemical modification – impact long-term cognitive and social-emotional functioning (Szyf, McGowan, & Meaney, 2008). The field awaits translation of this type of mechanism into human experiences. Finally, how might the field of developmental psychology benefit from advances being made in developmental neuroscience? First, given that our genome contains many fewer genes than we surmised even a decade ago (approximately 20,000), and given advances being made in the field of epigenetics, renewed attention should be paid to the origins and elaboration of complex human behaviors. Second, those working in the field of intervention should take stock of what is now known about neural plasticity; for example, it is quite possible that we could witness a revolution in new treatment approaches based on what we know about the malleability of the human brain. Finally, for the millions of children around the world who begin their lives in adverse circumstances, we should be mindful of what is known about sensitive periods, and act with alacrity to improve the lives of these children before neural circuits become well established and thus difficult to modify. To borrow an analogy from economics, by investing early and well in our children's development we increase the rate of return later in life, and in so doing improve not only the lives of individuals but of societies as well. The human brain develops from conception until the beginning of the second decade of life, starting with the most basic processes. That is, development begins with vital and autonomic functions and controls, continues with sensory and perceptual cognitive-motor processes, and culminates with the processes of integration and decision-making. The child's brain receives the combined influence of genetics and experience.
The brain's capacity to modify itself decreases with age. Cognitive, emotional, and social capacities are inexorably linked throughout life. Cognitive and motor functions interact in our brain as a direct consequence of our bipedal posture. The presence of toxins damages brain architecture, which can lead to lifelong problems in learning, behavior, and mental and physical health. The child's environment directly affects synaptogenesis and enables neurological optimization. One of the first consequences of these principles is that the role of preschools and of teachers at this stage of life is critical for establishing the solid functional foundations that will be necessary for the child's later development and for adult neurobiology. It has been established that a child's environment has a direct impact on the timing and nature of gene expression, which in turn directly affects the child's brain architecture. Brain, cognitive, sensory, and perceptual development does not occur all at once but rather in distinct developmental stages, as can be seen in Figure 1. The important point here is that each perceptual, cognitive, and emotional capacity is built on the scaffolding provided by the earliest stages of life. All of this occurs because, although there are genetic instructions to form numerous neurons and their connections, this formation is nondirectional, that is, without specification of their concrete form. It is rather the subsequent epigenetic reorganization (synaptic stabilization, a consequence of use and experience) that determines the shape and configuration of the brain's neural circuits.
In this regard, neuroscience has demonstrated the close relationship between the degree of environmental enrichment and neurobiological processes as important as brain biochemistry, gliogenesis, neurogenesis, and dendritic arborization, among others. Hence the crucial importance of experience in the earliest stages of life. It is also known that the precursors of what will later become the brain's distinct functional areas emerge approximately during the first two trimesters of human gestation. This organization is the base on which the detailed later development of neural circuits is built. These stages of development are therefore also fundamental, and any influence on them (e.g., exposure to toxins) can have far-reaching consequences. It is true that our brain is more plastic than that of many other organisms, retaining this property even into adulthood. However, the foundation of the adult brain's architecture is established early in life, and the influence of the environment during childhood is known to be tremendously significant even for the basic sensory processes of perception. And it is on these processes that the higher cognitive processes, attained at later stages and used in adulthood, are grounded. Each sensory and cognitive system has its own sensitive period, and those attained later are built on those attained earlier. Hence the same environmental conditions can have very different effects depending on the age of the child in question. Indeed, high-level neural circuits, which carry out sophisticated mental operations, depend for their correct functioning on the quality of the information supplied to them by low-level systems.
Low-level systems whose architecture has been shaped by healthy experiences in the earliest stages of life will supply the high-level circuits with precise, high-quality information. This, combined with rich and sophisticated experience at later stages of life, will allow the architecture of the circuits involved in higher cognitive functions to reach its full genetic potential. Early learning is therefore the foundation of later learning and is essential (though not sufficient) for optimal development of brain architecture. Put another way, enriched early experience must be followed by further enriched and sophisticated experience, especially while higher-order circuits are maturing. Neuroscience has established, in this regard, that cerebral glucose metabolism between the ages of 3 and 10 (a period corresponding to a stage of exuberant neural connectivity) is roughly double that of an adult. Environmental enrichment should therefore be synonymous with early childhood education. For the millions of children around the world who begin life in adverse circumstances, we should keep in mind what is known about sensitive periods of development and act diligently to improve their lives before neural circuits become established and entrenched and thus more difficult to modify. To borrow an economic metaphor, by investing early and well in children's development we increase the rate of return at later stages of life. In this way we would improve not only the lives of individuals but that of society as a whole. This work was supported in part by the government of Israel through the Kamea Dor-Bet program and by the Children's Autism Help Project-USA.
JOSEPH CATES: This is Joseph Cates. Today is July 18, 2016. I'm interviewing William S. Gannon. This interview is taking place at his home in Bedford, New Hampshire. This interview is sponsored by the Sullivan Museum and History Center and is part of the Norwich Voices Oral History Project. Do you go by Rev. Gannon? WILLIAM S. GANNON: Rev. Gannon, Father Gannon, Mr. Gannon or Bill. JC: [Chuckles] Or Bill. Okay. JC: Well, I'll tell you what, tell me your full name. JC: And what's your date of birth? JC: Okay. And where were you born? WG: In Manchester, New Hampshire. JC: Okay. And what Norwich class are you? JC: Tell me about where you grew up and what you did as a child. WG: Well, I grew up in Manchester, New Hampshire for the first 6 years. And being born on the day that the whole country celebrated Memorial Day, which was always May 30th, whenever it fell. We lived opposite Stark Park. And there were cannons at Stark Park. The Gannons lived by the cannons. And the parade ended at Stark Park. And when I was three years old, they shot their guns off three times. So, I of course, assumed that that was in honor of my birthday. And when I was four and they still shot them only three times, I was upset. WG: (Laughs) So, that was the first part of life here. I still have my three-year-old nursery school report and I'm very impressed with the quality of the thinking of the writer of the report. It was a page and a half. And I was amused by some of the comments that every dog I met I thought was my own. And, that when I was asked to do something I didn't understand, I would cry. But once it was explained to me, I was alright. I love to say, "And nothing has changed." WG: (Chuckles) And I guess I feel especially blessed by both my early -- my preschool education, which started at the age of three and my musical education which started before I was born because my mother was a concert pianist and the church organist and a teacher of piano. 
So, I was hearing Bach, Mozart, Beethoven and Chopin, and Romanov [Rachmaninoff?] and Debussy before I was born. And much later in life is when I had a very deep and still do have, a love of progressive jazz. That's the jazz from the 40s, 50s and 60s and 70s I'd say. I read somewhere something that led me to realize that my hearing Debussy early on had set me up for the chords that are present in modern jazz. And recently some social psychologist was telling me that when babies are adopted at one year old from Russia, they come to this country, something that is often unanticipated by the parents is, all they have heard, even though they aren't speaking yet, are the sounds of Russia, the Russian language. They have to pick up on the sounds of the English language. As adults, we tend to think that language is only important once you start speaking, but clearly, it's important even before you're born, you're hearing sounds from people's speech. So, I really thank my mother. She started me on the piano at age five and I still play but not publicly, on the piano. It never took with the seriousness that I wish it had. And I went on later, that was 11 or 12, to a piano teacher, another teacher in high school and nothing really got started until I took up the trombone in high school. But, my mother was very important to my early life, I now know, in ways that I didn't always appreciate when I was growing up and when I was an adult. We moved to Concord when I was six. I went to the first grade in Manchester. And, then we moved from Concord to Chester, New Hampshire, when I was ten and that would have been 1946. My father had always been, or for a long time, a grain salesman and he also owned a couple of grain stores. And he had bought a coal company in Derry, New Hampshire, and stopped his traveling. He worked for a grain company, a national company that sold to grain stores called, Park & Pollard.
And their slogan was Lay or Bust and on his stationery, there was a picture on one side, at the top, of a chicken laying an egg. And on the other side, of a chicken busting apart. And in between was the slogan, "Lay or Bust." And, I kind of felt delighted in realizing how profoundly in the 20s, 30s, 40s when he was on the road as a salesman, agriculture was where most people earned their living and got their sustenance. And it was coming to an end that was probably part of a 60, 80, maybe 100-year decline in this country. So, that was partly brought home to me, as I think back. When I was 11, I believe it was, he bought a chicken coop and got 25 little chicks, and grew them. And, I became their keeper. And, I had an egg route. And then the next year, we added onto the garage and I had the use of a horse and it was borrowed from a company that rented horses out during the summers; summer camps and places like that. And I'm surmising that we did them a favor by feeding and boarding the horse for the winter. And they did us a favor in giving me a horse to ride. And that was all part of the fact that my father had been in World War I in the cavalry, which sounds amazing. And that's partly probably why Norwich's cavalry past had some appeal to him and to me. And that's partly how we got the horse. So, in high school, which was Pinkerton Academy in Derry, New Hampshire, I guess I had a somewhat uneventful time. I played football on the varsity team, beginning my junior year and also my senior year. And then, when I came to Norwich, it seemed as if everybody was too big on the football team and I was heavily into the trombone. And I had practiced eight hours a day, as I noted in a piece that the Norwich Record had published, because I was afraid I wouldn't make it into the Norwich band. And -- but I did.
And, the trombone was the important thing to me and I can remember, and I think I mentioned this in the article, being at an alumni reunion and standing at the old SAE house, where I had been a member, with three or four other alums who I didn't know until that moment, and they were talking about the sports they played at Norwich. And they turned to me and said, "What did you play?" Then I said quite proudly, "The trombone." So, I started thinking I was going to be a businessman in my father's business. I'd worked part time, and on Saturdays for him, from the age of 13 on up to when I left for Norwich. And, it turned out that an ambition of my mother took over. So, in my sophomore year, I changed my major to history in preparation for going to law school. My grandfather had been a New Hampshire chief justice and the William Sawyer in my name was his name, William H. Sawyer. So, that lasted through a couple of years at Norwich, even up into my senior year. I'd been accepted at law school, but changed my mind at the last minute to go to seminary. And that was the influence of an Episcopal church chaplain who was also a professor at the school; I took a number of courses from him, and I just had a very deep interest in the subject matter, and those courses included Old and New Testament, one course for each. And, ethics, and there was a political philosophy class that I took that was also, I would say, in the philosophy direction. And it was basically a love of the subject matter that brought me to seminary. I was commissioned in the signal corps. So, that was deferred for four years. Normally a seminary education is a three-year event, but I stayed for an extra year and got two master's degrees when I graduated. Actually, one was -- the first three years was then a bachelor's and was later changed to a master's degree. It was a Master of Divinity. Christ Church. It was in Glen Ridge, New Jersey.
And, it had a reputation of being a rector, that's the position I had, a rector killer church. My immediate predecessor had been in there only three years. He was fired by the bishop because he first divorced his wife, kicked her out of the rectory and brought in some other woman. And, of course, enraged the congregation with that behavior. So, the bishop did what he should do and fired him. And, 30 years prior, this was 1991 when I went there, the rector had had some involvement with, probably a parishioner. He was married with children. And in a New York City hotel, he killed himself. Its impact on the church was so profound. I met somebody that had attended the church for eight months after that event and did not know about it, indicating that nobody talked about it. It was too painful to communicate. So, I was taking -- I knew I was taking on a church that was a tough place and it took, I would say, a good three to four years before things really calmed down and we got going again. And, when I retired in '03, I continued to do part time interim work as a priest in Episcopal churches. And I realized very quickly that when you come newly into a leadership position, whether it's a church or something else, you are inheriting a great deal and the trust relationship that either did or didn't exist with the prior administrator is going to bedevil you or bless you. And, places where there'd been profound leadership, I discovered it was very easy to come in and I would be immediately trusted and we'd get going and have fun. And places where there had been a succession, it would have to be more than one succession of bad leadership, it was going to be a battle of sorts to exert any kind of leadership. And, at this point, I'm just a pew sitter. (Laughs) And enjoying it. JC: -- and we're going to fill in some questions. You talked a little bit about why you chose Norwich. Can you elaborate more on that, why you chose to go to Norwich?
WG: Well, I think I chose mostly because of my father. I'd had relatives that went to Dartmouth, and perhaps -- and UNH. Perhaps that would have been my mother's choice. But, it was the military that intrigued me. I had a cousin who had been in World War II and I worked with him -- he worked for my father. He was about 10 years older and I had, just a high regard for him and I would guess that it was the military side. And I had a classmate, Harry Parkinson at Pinkerton who also got interested in Norwich. And, I remember him saying that he had had an uncle who'd been a soldier in World War II and had died. And I think that was part of his interest in going to Norwich. JC: And you said your major, you majored in history and you kept with that? JC: Why do you think you chose history, particularly? WG: And went from New York City to Canada (chuckles) -- came back through New Brunswick, through Maine at a certain point later, several generations later. That kind of had something to do with it. And, I guess other than I'm -- I still love history, read a good amount of military history. I sort of think I may be drawn to military history as one who hadn't served because when I got out of seminary there was nothing happening. And, I think if I had thought I should go into the service, it wouldn't be as a chaplain, it would be in the signal corps where I'd started out. I'm not sure if that would be true. And, where was I headed with this -- what was the question again? JC: Why you chose history as your major. WG: And it made sense that they were talking because they knew the people they were talking to would understand where they're coming from. That was their military service. And I wonder if maybe my father's -- he was in every battle in World War I, in Europe and was never wounded. So, I sort of grew up with hearing all that kind of stuff. JC: Was he in the first division or was he in the 76th? WG: The 76th Field Artillery Horse Drawn Cavalry. 
That's where the cavalry part came out of there. But he trained with horses. JC: Who were your roommates at Norwich and where did you live? WG: It was Jackman Hall. I can picture them. I'm not sure I can remember their names. Harry Parkinson was one. And there was a kid from Vermont that went on to West Point after the first year. And we were all bandsmen. And there was a guy, Lemons was one of the guys. He was an upperclassman. That was in a subsequent year. But that leads me to an event that happened, I think, in my junior year, when there was a shooting in a room. I think I was on the first floor of Jackman. And across the hall, a guy named Tony Reddington, was with a roommate who had a .45 pistol. And an upperclassman of mine, Norm Elliott, came in the room and saw it and said -- the two guys being rooks, "Let me see that." Picked it up. Took the clip out of the handle and aimed it at Tony Reddington and pulled the trigger. And it hit him in the body somewhere. Just unthinkable behavior. You would think. So, he, Tony was taken by ambulance to Hanover. The first successful aorta transplant kept him alive. He was able to survive about an hour's trip at least. However long it took the ambulance to get there and he came back to the school I think the next year and graduated. I'm pretty sure he graduated. WG: First time I was seeing him I think, since Norwich. And there he was. JC: Now, you were in band company. JC: What can you tell me about band company? WG: Well, I'm sure we had -- I'm trying to think if we ever played our instruments. I think we did. But I'm not sure when we might have done that. We got to play quite a bit, as a band. And, I think that was daily, which is important to do. I still play the trombone every day, because I play in a couple of concert bands. And I also play in a swing band. A couple of different ones. So, that was an important aspect because you have to keep your embouchure up if you're a brass player.
And we would be playing for the bringing of the flag down. And that would be a daily event. And one of my favorite stories and memories is of a time when our band had a major leader, not the professional guy but the cadet, who determined that he was going to have a yacht cannon that would shoot, just a blank, and it was positioned under one of the real cannons by the flag pole, and nobody knew that he was going to be doing this, that we were going to be doing it. And he had explained to us, probably about this time, that the bass drum was always hit, this was something we did to simulate a cannon going off. And then we would start with the National Anthem. And on this occasion, I remember seeing a rook standing at attention, holding a string. His arm was up, he was holding a string and he was going to pull the string on -- connected to the yacht cannon. So, he was given the command. And he pulled the string. And there was this huge roar and blue or black smoke and we started playing. And I remember looking because the trombones are in the front line, so I remember seeing both columns of cadets down the parade ground. And I was looking at the ones on the left as we faced east, I guess, and the whole column jumped at the cannon sound. And I'm sure the same thing was happening on the other side. There were three regimental officers in the middle and the cannon was sort of aimed at them. I'm not sure of this, but I believe I saw them leave the ground. WG: And held the salute for the duration of the National Anthem. (Chuckles) Well, our leader got fired from his -- he was reduced from a sergeant to a private. And, (laughs) was disciplined. I'm not sure how else he was disciplined and eventually became a leader again. That's a story worth -- and, that was the beginning of a tradition of a 105 howitzer being deployed in the things that take place with the flag coming down on the parade ground. JC: Okay. Now, you said you didn't play any sports, you just played the trombone.
JC: And, did you participate in any other activities? WG: I skied, but not on the ski team. And, I think that was part of the appeal of Norwich. And back then, there was a ski slope right across from the school. And, on a Saturday for sure and on Sunday, you could just walk across with your skis and just ski. And I remember that those of us in the signal corps course were a part at Mt. Mansfield, of setting up a communication system for some ski races that occurred there and to do that of course, we all got free skiing (laughs) as part of our setting of it up. JC: What did you do to relax when you were at Norwich? WG: Well, I think an important part of my Norwich experience was the fraternity life which we -- we joined fraternities -- was it our freshman year? I think so. And if it wasn't, it was the sophomore year. Because we ate in -- the mess hall is the current chapel, and after the chapel mess hall, it was in the fraternities that most, but not all, that most of the school had their meals, lunch and dinner. WG: And I'm sure he knew what was going on. And I would have to say, I would expect that the benefit in part was, and I don't know if anybody's studied this, but I'll bet there was a minimum of drunken driving accidents on the highways if all the drinking was happening at the school. So, the social life centered -- and Vermont College was a place where we got dates. Sometimes we went south, I can't remember the name of the school or the town, but it was in south Vermont. Some guys went to New York state for drinking purposes, because you could drink at 18 in New York state. WG: And I swear I could see the bullet flying through the air! WG: (Laughs) And I know that there were some others who -- I didn't go hunting. I hunted squirrels when I was growing up in Chester. But some guys were hunters and that was part of the relaxing. I played the trombone in a dance band, The Grenadiers. There were some pick up jam sessions.
I remember a classmate who has become a famous military historian, Carl Estes, Este [Carlo D'Este?], I'm not sure which it is. WG: Este, right. And he played the jazz guitar in the group. So that was -- I've always been a big reader. Tony Reddington told me when I saw him that he started reading Soren Kierkegaard because he saw a copy in my hip back pocket of the paperback, by that Danish existentialist philosopher. JC: What fraternity were you in? WG: SAE. Sigma Alpha Epsilon. JC: Okay. And, tell me a little bit more about The Grenadiers. WG: Well, it was a dance band. I think there were -- there's a full sax section or if not full, at least almost. Which would mean four saxes, full would be five, usually. And there were either two or three trumpets. There were two or three trombones. Maybe there were four, I'm not sure. Double bass, stand-up bass, drums and I'm not sure if we had, we probably had a pianist. And that was the standard -- maybe also guitar, I'm not sure about that. That was the standard makeup of dance bands in those days. Still is for that matter. And, I don't remember -- we must have played for dances. I don't remember doing it. But, the music was fully, I would have to say, at the top of my relaxing moments. I can play the piano. When I was 12, I had lessons from a jazz piano player who taught me the chords and I had -- as I said, this was on the piano, of all the chords. So, what happens is, you can get what's called a fake book which has the melody line and the chords. Guitar players use them, of course. But on the piano, you can play the chord with the left hand, melody with the right. And, I used to do some of that stuff in the fraternity house on the piano. And I remember one fun time at the fraternity house, at a party, they had -- I didn't have anything to do with this -- but they had taped the girls' restroom.
And at the conclusion, after all the dates had been taken home or left to however they got home, I mean, I think it was around 12:00 or 12:30 at night, we gathered in the kitchen to listen to the tape. And we roared with laughter when we heard one girl say, "This party shits. Let's go down to Dartmouth where they really know how to party!" JC: Do you remember any particular song that y'all would play? WG: Songs? Well, the songbook back then, which is still true for me now, "How High the Moon," "Sunny Side of the Street," "Body and Soul," "There Will Be Another You," "The Very Thought of You," and all those. I mean there are about -- there's got to be over a thousand of them that are in my head. JC: What about some Norwich songs? WG: Well, there is the school song, which I don't think I ever fully learned the words to. JC: What about "On the Steps of Old Jackman?" WG: Is that a song? JC: Do you remember that one? WG: I think that's since my time there. And I remember it being sung at some reunion recently. JC: That's one a lot of people sometimes mention. What instructor -- who were the instructors who were most influential to you during your time at Norwich? WG: Well, Rev. Hershel Miller was one, and he was the priest of a small Episcopal Church in Northfield as well as on the faculty of Norwich University in the religion department. There was a Roman Catholic priest who taught courses in the religion department and Hershel and that was the makeup of the department. In the -- the head of the history department was a Dr. Morse, who was a Harvard graduate, I'm pretty sure. And, my -- I took a number of courses, and the name is escaping me, but he was published. He was Eisenhower's historian. [Albert Norman?] And probably the name will come to me. And he lived a long time after retiring, and always sent me Christmas cards. And, I wasn't always an "A" student in his classes, usually a "B" student, I guess. 
But he seemed to have taken -- I think he liked the fact that I went on to seminary. Eber Spencer was the government professor that I had in philosophy -- political philosophy course. And he wrote my recommendation for law school. I was very fond of him. There was an English teacher who was the -- this was a big part of my life were the Pegasus Players. And the advisor for the Pegasus Players, I think his name was Nelson but I'm not sure. But, in my sophomore year, a friend got me involved in the Pegasus Players and a play called "Time Limit." And, for some reason, I got the lead. I don't know why. And that was the beginning of -- that changed my life. WG: And it's been true in my teaching and church (?) [0:47:04] life since I tended to be somewhat entertaining. JC: What were your favorite classes and least favorite classes? WG: And I don't pick up on stuff, which could mean I should never fly an airplane. JC: (Laughs) Probably so. What do you remember about being a rook? WG: Well, I remember being yelled at. I remember, I almost didn't come back. And, I think that that was partly -- I got one -- I remember getting 16 demerits one month. 12 was the limit. And for every demerit over 12 you had to march with a rifle for an hour around the parade ground. So, when I was doing my four hours, I was saying to myself, "This will never happen again," proving that harsh punishment can educate. I remember, but this was true later on too, but I remember feeling somewhat awed and admiring of the senior leaders in the barracks. The company commander and the first and second lieutenant. And I remember in the junior ROTC summer camp, which was Ft. Gordon in Georgia for me in signal (?) [0:50:43] corps, finding one of my first-year cadet officers who had inscribed his name in the firing range. When you were firing, you were behind the targets, underground, the bullets flying over your head, and it was a great pleasure that I saw that.
And I have since made a great deal, I think, in my own mind, and to a few people who are considering Norwich, of the importance of the cadre that first year. And I believe that it is somewhat rare today for young people whose peer group up through last year of school, is their age group, and that's somewhat adjusted by the Norwich experience because your peer group at Norwich, your first year is your age group and then the rest, older cadets who are teaching you and that makes a lot of sense to me. And whether they're being nice about it or not, you still learned how to make the beds the way they wanted you to and shining your shoes and polishing your brass, pressing your pants and shirt and where to keep stuff in a drawer, in a bureau drawer in the room. And the other aspects of getting ready for a daily inspection. And I think, generally, post-Norwich thinking, that most people, it's not until they hit the work world, that their peer group is other than their age group and it makes it, in my mind, much more important to have intergenerational experiences. This is true in the music world. And I think when you learn an instrument you have a non-parent teaching you how to play something, that's different. And parents are probably not so good at teaching because they have such an emotional investment. And when I was teaching in my private schools, three of them being boarding schools, I always thought that we teachers were doing a better job of parenting because we didn't have the emotional investment that the parent has. And very recently I've read that up until the 1970s, the nurturing community in a family wasn't just the two parents. It was grandparents, aunts and uncles, cousins, older kids, non-relatives that were functioning as aunts and uncles and somehow, at some point, maybe it's not the 70s, maybe it's the 50s, who knows. Life changed in the nurturing experience growing up, which could make the Norwich experience that much more important. 
JC: Now, you said you got 16 demerits. Do you remember what you did? WG: I don't. It was sloppy, whatever it was. I didn't -- I may have missed a class. That was worth two. And, I don't know, if I didn't shine my shoes or something. But it was dumb stuff. And the -- I can remember coming back as a sophomore and how happy everybody was. And when we visit Norwich, we -- and they mix the cadets up with the visiting people, it seems as if the cadets all have a very high spirit of being at ease and happy and on top of things and I think that's part of, that's part of the musical experience, is gaining mastery at an early age over something. And somebody's written a book recently called Grit, I don't know if you know of it. JC: I've heard of it. WG: You've heard of it. And she's a social psychologist. And her main point, which is present in advertising for the book is, it's not the smartest people who become the most successful. It's people who've learned perseverance. And I think that's part of the Norwich experience for those who don't drop out. JC: What was your favorite part about Norwich? WG: Going back. (Laughs) And not being part of the cadet corps. WG: (Laughs) I guess the mess hall was a favorite part. The fraternities were a favorite part. I loved the parading, in the band. That was a favorite part. Still, when I hear a marching band's drums, I get a special tingle. And the two bands that I play in, we're playing mostly serious and semi-serious music. Stuff like medleys from Duke Ellington or Broadway show medleys, that kind of stuff, but we also play marches. And I always enjoy playing the marches. And, I think the dance band is the direct descendant of the marching band. JC: What was the most important thing you think that Norwich taught you? WG: Good question. I would think it was perseverance. Now, that's somewhat influenced having just read this book. But, I tend to -- well I'll tell you a musical story.
I was living -- I was single, having broken up with my wife, living in Peekskill, New York. And forever after Norwich, I was always active as a musician, mostly in jazz swing bands. Which, of course, I was. And the thought now of the effort I went through to get from Peekskill to the gig and back was rather extraordinary. But it has been true of my life generally that I push hard. JC: Norwich's motto is "I Will Try." What does that mean to you? WG: I mean, it's clear that no ad man designed it. But I actually think, on my second and third thoughts about it, it's pretty good. And I just read that infants -- we saw a 12- or 14-month-old boy in a restaurant waiting room with grandparents and parents surrounding him, and he was standing with his arms out, back and forth as he maintained his balance. Is he going to take a step, or isn't he? That being hugely entertaining to the family and everybody else. Infants have to try again and again and again, and they don't experience shame or failure. So, that could say that one of the more inhibiting aspects of adult life is when we fail and get all hung up over it, rather than trying again. And, it turns out, in science and in life generally, so much of the best stuff that happens happens because you don't give up. WG: I think that there is a national community that is being addressed by that identity. And the contributions that we make as citizens to our national life are all going to be happening locally, to be sure. But we are citizen soldiers in many of the contributions that we make, whether in the military or not. And I just think, especially at the late-adolescence, early-adulthood stage of life, there are advantages to the military experience. 
I had a driver from an automobile company give me a ride home while they fixed my car, and she'd just gotten out of a four-year air force stint. I asked her if she'd gone back to college, because she had gone into the air force after high school. She said she had tried community college but it just didn't take, and my sense of it was that she couldn't stand the people she was going to school with. That they didn't have the dedication and seriousness that (inaudible) [1:05:21] the air force had had. And I've also read recently, I don't know if you've read Sebastian Junger's book Tribe, but I can recommend it. It's short. The pages are short. And it's about our society and its brokenness, and how people come out of the military, from such a self-sacrificing, dedicated, community-oriented life, into the me-too-ism and lack of community in our country generally. And he's attributing the PTSD and depression to that, rightly or wrongly, I'm not sure, but it makes some sense. He points out that after 9/11 in New York City, the murder rate was cut in half, the suicide rate was cut in half, because there was a greater sense of community. And I think Norwich has that sense of community that he's saying is missing. So, maybe Norwich people should be prepared for the dysfunctional world they're entering and how to cope with it. JC: Now, after graduation, you went on to seminary. You never did join the military. WG: Well, I think that's the same question as earlier. I think it prepared me for perseverance. I was a preacher at Norwich after I graduated from seminary. And Herbert Spencer, my philosophy and politics teacher, told me after the sermon that he was just amazed at how much more mature I was than I had been at Norwich. 
And I believe that this may be true of graduate study generally: you learn to think in a more disciplined way than you did in college, which is not a commentary on Norwich necessarily, but perhaps on our expectations of what college is supposed to do. And my experience in graduate school was reading. I'm a big reader, and when I got there I took a speed-reading course, knowing that a huge amount of time was going to be spent reading. And it was very effective. But I believe part of what was happening to me was, in seminary I was learning how other people of great skill think. It doesn't mean that I bought their thought, but I knew how they were thinking. And I would have to say that, whatever I learned at Norwich, seminary deepened the thinking aspect of the life that I received. WG: Well, that's good. I don't know. It could go back to perseverance. I've been a very outspoken person in my professional life. And I think that could have been nurtured at Norwich, calling a spade a spade when I would see it, regardless of the consequences. And I think I sort of have a reputation in that way. Some people tell me that they are amazed at my courage, that I don't seem to be scared by what other people are scared of. And I think I was a very fearful person my first year at Norwich. And that may have -- when I went to the summer camp training, I had what I regard as a very important experience. There were about 750 or 800 cadets. And we were taken into a field and told to yell as loud as we could. And so, of the three, I was chosen to be the regimental cadet colonel for marching all of the cadets from the barracks area to a parade area, doing the parade thing, and marching back. 
And a regular army lieutenant took me -- I didn't have any misgivings about this, because part of the Norwich experience, even if you weren't in a command position -- I was a private all my first three years -- was that you'd seen people do this over and over again. So, you're ready to mimic what you see. But he took me over the route I'd be taking, so we rehearsed. I knew what the commands were going to be, but I hadn't known where I'd be going. So, we rehearsed the whole thing. WG: I want to get onto other things. JC: Do you think being a Norwich graduate opened doors for you that wouldn't have been open otherwise? WG: Well, that's a good question. I don't know. I also would say, and I think it's important to know this, that when I was there, partly through the influence of this religion teacher, there were a high number of Norwich guys that went to seminary. And it could be partly the military, because a big part of Sunday morning life is ritual. And I think it could be the emphasis on surface that the military had and Norwich has. And I don't know what the situation is now. I don't think being a minister today has the social significance that it once had. It's not something everybody's dying to do. But I would think that probably more people were going to seminary in those old days from that school than perhaps from others. JC: Do you think Norwich graduates have a special bond that other military or civilian schools don't have? WG: Oh yes. Yes. Yes. JC: -- that kids have very close-knit bonds. WG: Yes, yes. That's also true in my jazz life. You meet a jazz musician anywhere, you're totally at ease. And he may be totally untrustworthy, but you don't know that, and you're willing to trust him until he proves otherwise. And, yes, I would guess this is true of Norwich people -- certainly of the band people at Norwich. And it's partly because you both know what the other one has been through. 
And, to a certain extent, I wouldn't be surprised if the same thing is true for all of the Norwich graduates. JC: Now, have you been involved with Norwich since you graduated? WG: Yes. General Todd. And I've occasionally gone to the send-offs, and to the occasional event, as when Schneider came to Bedford within the past year. And, that's what I think. JC: Do you stay in touch with any of your classmates? WG: -- cadet. And I just don't agree with that at all. I think there's a lot of stupidity at work in the anti-Muslim feeling. And the real situation is that Saudi Arabian Wahhabism, which merged the tribal culture of Saudi Arabia with Islam, has been exported both to this country and to other parts of the world, and has resulted in ISIS and such a bad reputation for Muslims. But, there you go. JC: What advice would you give a rook today on how to survive and thrive at Norwich? WG: (Laughs) They should try. WG: Whatever it is. Keep trying. JC: Now, did you have any other relatives that attended Norwich? JC: Is there anything else you'd like to add, or any comment? WG: I'll probably think of it after you leave. JC: That's generally the way it goes. Alright, well I thank you very much for this interview. WG: You're welcome. It's been very enjoyable.
The number of shares of common stock outstanding at July 14, 2017 was 694,689,883. The consolidated condensed financial statements included herein have been prepared by Danaher Corporation (“Danaher” or the “Company”) without audit, pursuant to the rules and regulations of the Securities and Exchange Commission. In this quarterly report, the terms “Danaher” or the “Company” refer to Danaher Corporation, Danaher Corporation and its consolidated subsidiaries or the consolidated subsidiaries of Danaher Corporation, as the context requires. Unless otherwise indicated, all amounts in this quarterly report refer to continuing operations. Certain information and footnote disclosures normally included in financial statements prepared in accordance with accounting principles generally accepted in the United States have been condensed or omitted pursuant to such rules and regulations; however, the Company believes that the disclosures are adequate to make the information presented not misleading. The consolidated condensed financial statements included herein should be read in conjunction with the financial statements as of and for the year ended December 31, 2016 and the Notes thereto included in the Company’s Current Report on Form 8-K filed on June 19, 2017 and the 2016 Annual Report on Form 10-K filed on February 22, 2017 (collectively, the “2016 Annual Report”). In the opinion of the Company, the accompanying financial statements contain all adjustments (consisting of only normal recurring accruals) necessary to present fairly the financial position of the Company as of June 30, 2017 and December 31, 2016, its results of operations for the three and six-month periods ended June 30, 2017 and July 1, 2016 and its cash flows for each of the six-month periods then ended. Accumulated Other Comprehensive Income (Loss)—The changes in accumulated other comprehensive income (loss) by component are summarized below ($ in millions). 
Foreign currency translation adjustments are generally not adjusted for income taxes as they relate to indefinite investments in non-U.S. subsidiaries. (a) This accumulated other comprehensive income (loss) component is included in the computation of net periodic pension cost. Refer to Note 7 for additional details. (b) Included in other income in the accompanying Consolidated Condensed Statement of Earnings. Refer to Note 10 for additional details. New Accounting Standards—In May 2017, the Financial Accounting Standards Board (“FASB”) issued Accounting Standards Update (“ASU”) No. 2017-09, Compensation—Stock Compensation (Topic 718): Scope of Modification Accounting, which provided clarity on which changes to the terms or conditions of share-based payment awards require an entity to apply the modification accounting provisions required in Topic 718. The standard is effective for all entities for annual periods beginning after December 15, 2017, with early adoption permitted, including adoption in any interim period for which financial statements have not yet been issued. The Company does not expect the adoption of this ASU will have a material impact on its consolidated financial statements. In March 2017, the FASB issued ASU No. 2017-07, Compensation—Retirement Benefits (Topic 715): Improving the Presentation of Net Periodic Pension Cost and Net Periodic Postretirement Benefit Cost, which requires employers to disaggregate the service cost component from other components of net periodic benefit costs and to disclose the amounts of net periodic benefit costs that are included in each income statement line item. 
The standard requires employers to report the service cost component in the same line item as other compensation costs and to report the other components of net periodic benefit costs (which include interest costs, expected return on plan assets, amortization of prior service cost or credits and actuarial gains and losses) separately and outside a subtotal of operating income. The income statement guidance requires application on a retrospective basis. The ASU is effective for public entities for annual periods beginning after December 15, 2017, including interim periods, with early adoption permitted. Management has not yet completed its assessment of the impact of the new standard on the Company’s consolidated financial statements. In June 2016, the FASB issued ASU No. 2016-13, Financial Instruments—Credit Losses (Topic 326): Measurement of Credit Losses on Financial Instruments, which amends the impairment model by requiring entities to use a forward-looking approach based on expected losses to estimate credit losses on certain types of financial instruments, including trade receivables. The ASU is effective for public entities for fiscal years beginning after December 15, 2019, with early adoption permitted. Management has not yet completed its assessment of the impact of the new standard on the Company’s consolidated financial statements. In March 2016, the FASB issued ASU No. 2016-09, Compensation—Stock Compensation (Topic 718), which simplifies several aspects of the accounting for share-based payment transactions, including the income tax consequences, classification of awards as either equity or liabilities, classification of certain items on the statement of cash flows and accounting for forfeitures. The Company has adopted this standard effective January 1, 2017. 
The ASU requires that the difference between the actual tax benefit realized upon exercise or vesting, as applicable, and the tax benefit recorded based on the fair value of the stock award at the time of grant (the “excess tax benefits”) be reflected as a reduction of the current period provision for income taxes with any shortfall recorded as an increase in the tax provision rather than as a component of changes to additional paid-in capital. The ASU also requires the excess tax benefit realized be reflected as operating cash flow rather than a financing cash flow. For the three and six-month periods ended June 30, 2017, the provision for income taxes from continuing operations was reduced and operating cash flow from continuing operations was increased by $7 million and $33 million, respectively, reflecting the impact of adopting this standard. Had this ASU been adopted at January 1, 2016, the provision for income taxes from continuing operations would have been reduced and operating cash flow from continuing operations would have been increased by $12 million and $26 million from the amounts reported for the three and six-month periods ended July 1, 2016, respectively. The actual benefit to be realized in future periods is inherently uncertain and will vary based on the price of the Company’s common stock as well as the timing of and relative value realized for future share-based transactions. In February 2016, the FASB issued ASU No. 2016-02, Leases (Topic 842), which requires lessees to recognize a right-of-use asset and a lease liability for all leases with terms greater than 12 months. The standard also requires disclosures by lessees and lessors about the amount, timing and uncertainty of cash flows arising from leases. The accounting applied by a lessor is largely unchanged from that applied under the current standard. The standard must be adopted using a modified retrospective transition approach and provides for certain practical expedients. 
The ASU is effective for public entities for fiscal years beginning after December 15, 2018, with early adoption permitted. Management has not yet completed its assessment of the impact of the new standard on the Company’s consolidated financial statements. In May 2014, the FASB issued ASU No. 2014-09, Revenue from Contracts with Customers (Topic 606), and has since issued amendments that clarified the guidance on certain items such as reporting revenue as a principal versus agent, identifying performance obligations, accounting for intellectual property licenses, assessing collectability, presentation of sales taxes, impairment testing for contract costs and disclosure of performance obligations. The Company plans to adopt the new standard on January 1, 2018 and expects the impact of the new standard on the amount and timing of revenue recognition to be insignificant. The new standard will require certain costs, primarily commissions on contracts greater than one year in duration, to be capitalized rather than expensed currently. The new standard will also require additional disclosure about the nature, amount, timing and uncertainty of revenue and cash flows from customer contracts, including judgments and changes in judgments and assets recognized from costs incurred to obtain or fulfill a contract. The Company expects to use the modified retrospective method of adoption, reflecting the cumulative effect of initially applying the new standard to revenue recognition in the first quarter of 2018. For a description of the Company’s acquisition activity for the year ended December 31, 2016 reference is made to the financial statements as of and for the year ended December 31, 2016 and Note 2 thereto included in the Company’s 2016 Annual Report. The Company continually evaluates potential acquisitions that either strategically fit with the Company’s existing portfolio or expand the Company’s portfolio into a new and attractive business area. 
The Company has completed a number of acquisitions that have been accounted for as purchases and have resulted in the recognition of goodwill in the Company’s financial statements. This goodwill arises because the purchase prices for these businesses reflect a number of factors including the future earnings and cash flow potential of these businesses, the multiple to earnings, cash flow and other factors at which similar businesses have been purchased by other acquirers, the competitive nature of the processes by which the Company acquired the businesses, avoidance of the time and costs which would be required (and the associated risks that would be encountered) to enhance the Company’s existing product offerings to key target markets and enter into new and profitable businesses, anticipated opportunities for synergies from the elimination of redundant facilities and staffing and use of each party’s respective, existing commercial infrastructure to cost-effectively expand sales of the other party’s products and services, and the complementary strategic fit and resulting synergies these businesses bring to existing operations. The Company makes an initial allocation of the purchase price at the date of acquisition based upon its understanding of the fair value of the acquired assets and assumed liabilities. The Company obtains this information during due diligence and through other sources. In the months after closing, as the Company obtains additional information about these assets and liabilities, including through tangible and intangible asset appraisals, and learns more about the newly acquired business, it is able to refine the estimates of fair value and more accurately allocate the purchase price. Only items identified as of the acquisition date are considered for subsequent adjustment. 
The Company is continuing to evaluate certain pre-acquisition contingencies associated with certain of its 2017 and 2016 acquisitions and is also in the process of obtaining valuations of certain property, plant, and equipment, acquired intangible assets and certain acquisition-related liabilities in connection with these acquisitions. The Company will make appropriate adjustments to the purchase price allocation prior to completion of the measurement period, as required. During the first six months of 2017, the Company acquired three businesses for total consideration of $94 million in cash, net of cash acquired. The businesses acquired complement existing units of the Life Sciences and Environmental & Applied Solutions segments. The aggregate annual sales of these three businesses at the time of their respective acquisitions, in each case based on the company’s revenues for its last completed fiscal year prior to the acquisition, were approximately $65 million. The Company preliminarily recorded an aggregate of $71 million of goodwill related to these acquisitions. In the first quarter of 2017, Danaher acquired the remaining noncontrolling interest associated with one of its prior business combinations for consideration of $64 million. Danaher recorded the increase in ownership interests as a transaction within stockholders’ equity. As a result of this transaction, noncontrolling interests were reduced by $63 million reflecting the carrying value of the interest with the $1 million difference charged to additional paid-in capital. In the six-month period ended July 1, 2016, unaudited pro forma earnings set forth above were adjusted to include the $23 million pretax impact of nonrecurring acquisition date fair value adjustments to inventory and deferred revenue primarily related to the 2016 acquisition of Cepheid. On July 2, 2016 (the “Distribution Date”), Danaher completed the separation (the “Separation”) of Fortive Corporation (“Fortive”). 
For additional details on the Separation reference is made to the financial statements as of and for the year ended December 31, 2016 and Note 3 thereto included in the Company’s 2016 Annual Report. The accounting requirements for reporting the Separation of Fortive as a discontinued operation were met when the Separation was completed. Accordingly, the accompanying consolidated condensed financial statements for all periods presented reflect this business as a discontinued operation. In connection with the Separation, Danaher and Fortive entered into various agreements to effect the Separation and provide a framework for their relationship after the Separation, including a transition services agreement, an employee matters agreement, a tax matters agreement, an intellectual property matters agreement and a Danaher Business System (“DBS”) license agreement. These agreements provide for the allocation between Danaher and Fortive of assets, employees, liabilities and obligations (including investments, property and employee benefits and tax-related assets and liabilities) attributable to periods prior to, at and after Fortive’s separation from Danaher and govern certain relationships between Danaher and Fortive after the Separation. In addition, Danaher is party to various commercial agreements with Fortive entities. The amounts billed for transition services provided under the above agreements as well as commercial sales and purchases to and from Fortive were not material to the Company’s results of operations for the three or six-month periods ended June 30, 2017. In the six-month period ended June 30, 2017, Danaher recorded a $22 million income tax benefit related to the release of previously provided reserves associated with uncertain tax positions on certain Danaher tax returns which were jointly filed with Fortive entities. These reserves were released due to the expiration of statutes of limitations for those returns. 
All Fortive entity-related balances were included in the income tax benefit related to discontinued operations. The Company has not identified any “triggering” events which indicate a potential impairment of goodwill in the six-month period ended June 30, 2017. Available-for-sale securities, which are included in other long-term assets in the accompanying Consolidated Condensed Balance Sheets, are either measured at fair value using quoted market prices in an active market or if they are not traded on an active market are valued at quoted prices reported by investment brokers and dealers based on the underlying terms of the security and comparison to similar securities traded on an active market. The Company has established nonqualified deferred compensation programs that permit officers, directors and certain management employees to defer a portion of their compensation, on a pretax basis, until at or after their termination of employment (or board service, as applicable). All amounts deferred under such plans are unfunded, unsecured obligations of the Company and are presented as a component of the Company’s compensation and benefits accrual included in other long-term liabilities in the accompanying Consolidated Condensed Balance Sheets. Participants may choose among alternative earning rates for the amounts they defer, which are primarily based on investment options within the Company’s 401(k) program (except that the earnings rates for amounts deferred by the Company’s directors and amounts contributed unilaterally by the Company are entirely based on changes in the value of the Company’s common stock). Changes in the deferred compensation liability under these programs are recognized based on changes in the fair value of the participants’ accounts, which are based on the applicable earnings rates. 
As of June 30, 2017 and December 31, 2016, available-for-sale securities were categorized as Level 1 and Level 2, as indicated above, and short and long-term borrowings were categorized as Level 1. The fair value of long-term borrowings was based on quoted market prices. The difference between the fair value and the carrying amounts of long-term borrowings (other than the Company’s Liquid Yield Option Notes due 2021 (the “LYONs”)) is attributable to changes in market interest rates and/or the Company’s credit ratings subsequent to the incurrence of the borrowing. In the case of the LYONs, differences in the fair value from the carrying value are attributable to changes in the price of the Company’s common stock due to the LYONs’ conversion features. The fair values of borrowings with original maturities of one year or less, as well as cash and cash equivalents, trade accounts receivable, net and trade accounts payable approximate their carrying amounts due to the short-term maturities of these instruments. For additional details regarding the Company’s debt financing, reference is made to Note 9 of the Company’s financial statements as of and for the year ended December 31, 2016 included in the Company’s 2016 Annual Report. The Company maintains a revolving credit facility (the “Credit Facility”) and a 364-day revolving credit facility (the “364-Day Facility” and together with the Credit Facility, the “Credit Facilities”), to provide additional liquidity support for issuances under the Company’s U.S. dollar and euro-denominated commercial paper programs. Effective April 21, 2017, the Company reduced the commitment amount under the 364-Day Facility from $3.0 billion to $2.3 billion, and effective June 23, 2017, the Company further reduced the commitment amount under the facility to $1.0 billion, as permitted by the facility. As of June 30, 2017, no borrowings were outstanding under the Credit Facilities, and the Company was in compliance with all covenants under the facility. 
In addition to the Credit Facilities, the Company has also entered into reimbursement agreements with various commercial banks to support the issuance of letters of credit. As of June 30, 2017, borrowings outstanding under the Company’s U.S. dollar and euro-denominated commercial paper programs had a weighted average annual interest rate of negative 0.2% and a weighted average remaining maturity of approximately 55 days. The Company has classified approximately $3.7 billion of its borrowings outstanding under the commercial paper programs as of June 30, 2017 as long-term debt in the accompanying Consolidated Condensed Balance Sheet as the Company had the intent and ability, as supported by availability under the Credit Facility, to refinance these borrowings for at least one year from the balance sheet date. Debt discounts, premiums and debt issuance costs totaled $29 million and $25 million as of June 30, 2017 and December 31, 2016, respectively, and have been netted against the aggregate principal amounts of the related debt in the components of debt table above. On May 11, 2017, DH Japan Finance S.A. (“Danaher Japan”), a wholly-owned finance subsidiary of the Company, completed the private placement of ¥30.8 billion aggregate principal amount of 0.3% senior unsecured notes due May 11, 2027 (the “2027 Yen Notes”) and ¥53.2 billion aggregate principal amount of 0.65% senior unsecured notes due May 11, 2032 (the “2032 Yen Notes” and together with the 2027 Yen Notes, the “Yen Notes”). The 2027 and 2032 Yen Notes were issued at 100% of their principal amount. The 2027 and 2032 Yen Notes are fully and unconditionally guaranteed by the Company. The Company received net proceeds, after offering expenses, of approximately ¥83.6 billion (approximately $744 million based on currency exchange rates as of the date of the pricing of the notes) and used the net proceeds from the offering to partially repay commercial paper borrowings. 
Interest on the 2027 and 2032 Yen Notes is payable semiannually in arrears on May 11 and November 11 of each year, commencing on November 11, 2017. On June 30, 2017, DH Europe Finance S.A. (“Danaher International”), a wholly-owned finance subsidiary of the Company, completed the underwritten public offering of €250 million aggregate principal amount of floating rate, senior unsecured notes due 2022 (the “2022 Floating Rate Euronotes”) and €600 million aggregate principal amount of 1.2% senior unsecured notes due 2027 (the “2027 Euronotes” and together with the 2022 Floating Rate Euronotes, the “Euronotes”). The 2022 Floating Rate Euronotes were issued at 100.147% of their principal amount, will mature on June 30, 2022 and bear interest at a floating rate equal to three-month EURIBOR plus 0.3% per year (provided that the minimum interest rate is zero). The 2027 Euronotes were issued at 99.682% of their principal amount, will mature on June 30, 2027 and bear interest at the rate of 1.2% per year. The Euronotes are fully and unconditionally guaranteed by the Company. The Company received net proceeds, after underwriting discounts and commissions and offering expenses, of €843 million (approximately $940 million based on currency exchange rates as of the date of the pricing of the notes) and used the net proceeds from the offering to repay the €500 million aggregate principal amount of floating rate senior unsecured notes which matured on June 30, 2017 as well as to repay commercial paper borrowings. Interest on the 2022 Floating Rate Euronotes is payable quarterly in arrears on March 31, June 30, September 30 and December 31 of each year, commencing on September 30, 2017. Interest on the 2027 Euronotes is payable annually in arrears on June 30 of each year, commencing on June 30, 2018. The indenture under which the Euronotes were issued contains customary covenants, all of which the Company was in compliance with as of June 30, 2017. 
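As a rough cross-check on the figures above, the gross proceeds implied by the stated issue prices can be computed directly from the principal amounts and the percentage-of-principal pricing. This is an illustrative sketch, not part of the filing; the roughly €5 million difference between the computed gross figure and the €843 million net proceeds corresponds to the underwriting discounts, commissions and offering expenses, which are not itemized here.

```python
def gross_proceeds(principal_eur, issue_price_pct):
    """Gross proceeds of a note issue priced as a percentage of principal."""
    return principal_eur * issue_price_pct / 100.0

# 2022 Floating Rate Euronotes: EUR 250M issued at 100.147% of principal.
# 2027 Euronotes: EUR 600M issued at 99.682% of principal.
total = gross_proceeds(250_000_000, 100.147) + gross_proceeds(600_000_000, 99.682)
print(round(total))  # about EUR 848.5 million gross, before issuance costs
```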
The Company may also redeem a holder’s notes at a price equal to the principal amount of the notes, plus accrued and unpaid interest and certain swap-related losses as applicable, in certain circumstances whereby such holder comes into violation of economic sanctions laws as a result of holding such notes. At any time and from time to time prior to March 30, 2027 (three months prior to the maturity date of the 2027 Notes), the Company may redeem the 2027 Notes, in whole or in part, by paying the principal amount and a “make-whole” premium, plus accrued and unpaid interest. In addition, on or after March 30, 2027, the Company will have the right, at its option, to redeem the 2027 Notes, in whole or in part, at any time and from time to time, by paying the principal amount plus accrued and unpaid interest. At any time and from time to time, the Company may redeem the Yen Notes, in whole or in part, by paying the principal amount and a “make-whole” premium, plus accrued and unpaid interest and net of certain swap-related gains or losses as applicable. The Company may also redeem the Euronotes and the Yen Notes upon the occurrence of specified, adverse changes in tax laws, or interpretations under such laws, at a redemption price equal to the principal amount of the notes to be redeemed. The €500 million of floating rate senior unsecured notes due in 2017 were repaid upon their maturity in June 2017. Danaher has guaranteed long-term debt and commercial paper issued by certain of its wholly-owned subsidiaries. The 2017 Euronotes, 2019 Euronotes, 2022 Euronotes, 2022 Floating Rate Euronotes, 2025 Euronotes and 2027 Euronotes were issued by Danaher International. The 2017 CHF Bonds, 2023 CHF Bonds and 2028 CHF Bonds were issued by DH Switzerland Finance S.A. (“Danaher Switzerland”), a wholly-owned finance subsidiary of the Company. The 2021 Yen Notes, 2027 Yen Notes and 2032 Yen Notes were issued by Danaher Japan. 
All securities issued by each of Danaher International, Danaher Switzerland and Danaher Japan are fully and unconditionally guaranteed by the Company and these guarantees rank on parity with the Company’s unsecured and unsubordinated indebtedness. During the six-month period ended June 30, 2017, holders of certain of the Company’s LYONs converted such LYONs into an aggregate of approximately two thousand shares of the Company’s common stock, par value $0.01 per share. The Company’s deferred tax liability associated with the book and tax basis difference in the converted LYONs was transferred to additional paid-in capital as a result of the conversions. During 2017, the Company’s cash contribution requirements for its U.S. and non-U.S. defined benefit pension plans are expected to be approximately $35 million and $40 million, respectively. The ultimate amounts to be contributed depend upon, among other things, legal requirements, underlying asset returns, the plan’s funded status, the anticipated tax deductibility of the contribution, local practices, market conditions, interest rates and other factors. The Company’s effective tax rate from continuing operations for the three and six-month periods ended June 30, 2017 was 13.6% and 15.4%, respectively, as compared to 36.1% and 30.2% for the three and six-month periods ended July 1, 2016, respectively. The effective tax rate for the three-month period ended June 30, 2017 reflects tax benefits related to charges that are predominantly in the United States, which in aggregate decreased the reported tax rate by 6.9%. The effective tax rate for the six-month period ended June 30, 2017 reflects the aforementioned benefits recorded in the second quarter of 2017 and higher than expected benefits recorded in the first quarter of 2017 related to excess tax benefits from stock-based compensation, which in aggregate reduced the reported tax rate by 5.1%.
The effective tax rate for the three-month period ended July 1, 2016 included charges related to repatriation of earnings and legal entity realignments associated with the Separation and other discrete items, which in aggregate increased the effective tax rate by 15.1%. The effective tax rate for the six-month period ended July 1, 2016 included these Separation charges in addition to the impact of a higher tax rate on the gain from the sale of marketable equity securities which in aggregate increased the effective tax rate by 9.6%. Tax authorities in Denmark have raised significant issues related to interest accrued by certain of the Company’s subsidiaries. On December 10, 2013, the Company received assessments from the Danish tax authority (“SKAT”) totaling approximately DKK 1.4 billion including interest through June 30, 2017 (approximately $222 million based on the exchange rate as of June 30, 2017), imposing withholding tax relating to interest accrued in Denmark on borrowings from certain of the Company’s subsidiaries for the years 2004-2009. The Company is currently in discussions with SKAT and anticipates receiving an assessment for years 2010-2012 totaling approximately DKK 853 million including interest through June 30, 2017 (approximately $131 million based on the exchange rate as of June 30, 2017). Management believes the positions the Company has taken in Denmark are in accordance with the relevant tax laws and is vigorously defending its positions. The Company appealed these assessments with the National Tax Tribunal in 2014 and intends to pursue this matter through the European Court of Justice should this appeal be unsuccessful. The ultimate resolution of this matter is uncertain, could take many years, and could result in a material adverse impact to the Company’s financial statements, including its effective tax rate.
Neither the Company nor any “affiliated purchaser” repurchased any shares of Company common stock during the six-month period ended June 30, 2017. On July 16, 2013, the Company’s Board of Directors approved a repurchase program (the “Repurchase Program”) authorizing the repurchase of up to 20 million shares of the Company’s common stock from time to time on the open market or in privately negotiated transactions. As of June 30, 2017, 20 million shares remained available for repurchase pursuant to the Repurchase Program. For a full description of the Company’s stock-based compensation programs, reference is made to Note 17 of the Company’s financial statements as of and for the year ended December 31, 2016 included in the Company’s 2016 Annual Report. As of June 30, 2017, approximately 73 million shares of the Company’s common stock were reserved for issuance under the 2007 Omnibus Incentive Plan. As of June 30, 2017, $145 million of total unrecognized compensation cost related to stock options is expected to be recognized over a weighted average period of approximately three years. Future compensation amounts will be adjusted for any changes in estimated forfeitures. The Company realized a tax benefit of $11 million and $49 million in the three and six-month periods ended June 30, 2017, respectively, related to the exercise of employee stock options and vesting of RSUs. As a result of the adoption of ASU 2016-09, Compensation—Stock Compensation, the excess tax benefit of $7 million and $33 million for the three and six-month periods ended June 30, 2017, has been recorded as a reduction to the current income tax provision and is reflected as an operating cash inflow in the accompanying Consolidated Condensed Statement of Cash Flows. Prior to the adoption of ASU 2016-09, the excess tax benefit was recorded as an increase to additional paid-in capital and was reflected as a financing cash flow.
The Company received $265 million of cash proceeds from the sale of marketable equity securities during the first quarter of 2016. The Company recorded a pretax gain related to this sale of $223 million ($140 million after-tax or $0.20 per diluted share) during the six-month period ended July 1, 2016. For additional details regarding the Company’s restructuring activities, reference is made to Note 14 of the Company’s financial statements as of and for the year ended December 31, 2016 included in the Company’s 2016 Annual Report. During the three-month period ended June 30, 2017, the Company made the strategic decision to discontinue a molecular diagnostic product line in its Diagnostics segment. As a result, the Company recorded $76 million of pretax restructuring, impairment and other related charges ($51 million after-tax or $0.07 per diluted share). These charges included $49 million of noncash charges for the impairment of certain technology-related intangible assets as well as related inventory and property, plant, and equipment with no further use. In addition, the Company incurred $27 million of cash restructuring costs primarily related to employee severance and related charges. Substantially all restructuring activities related to this discontinued product line were completed in the three-month period ended June 30, 2017. For a description of the Company’s litigation and contingencies, reference is made to Note 16 of the Company’s financial statements as of and for the year ended December 31, 2016 included in the Company’s 2016 Annual Report. The Company generally accrues estimated warranty costs at the time of sale. In general, manufactured products are warranted against defects in material and workmanship when properly used for their intended purpose, installed correctly, and appropriately maintained. Warranty period terms depend on the nature of the product and range from 90 days up to the life of the product. 
The amount of the accrued warranty liability is determined based on historical information such as past experience, product failure rates or number of units repaired, estimated cost of material and labor, and in certain instances estimated property damage. The accrued warranty liability is reviewed on a quarterly basis and may be adjusted as additional information regarding expected warranty costs becomes known. Basic net earnings per share (“EPS”) from continuing operations is calculated by dividing net earnings from continuing operations by the weighted average number of common shares outstanding for the applicable period. Diluted net EPS from continuing operations is computed based on the weighted average number of common shares outstanding increased by the number of additional shares that would have been outstanding had the potentially dilutive common shares been issued and reduced by the number of shares the Company could have repurchased with the proceeds from the issuance of the potentially dilutive shares. For the three and six-month periods ended June 30, 2017, approximately four million options to purchase shares were not included in the diluted EPS from continuing operations calculation as the impact of their inclusion would have been anti-dilutive. For both the three and six-month periods ended July 1, 2016 there were no anti-dilutive options to purchase shares excluded from the diluted EPS from continuing operations calculation. The Company operates and reports its results in four separate business segments consisting of the Life Sciences, Diagnostics, Dental and Environmental & Applied Solutions segments. When determining the reportable segments, the Company aggregated operating segments based on their similar economic and operating characteristics. Operating profit represents total revenues less operating expenses, excluding nonoperating income and expense, interest and income taxes. 
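The diluted-share computation described above is the treasury stock method: assumed option exercises add shares, and the assumed proceeds buy some of them back at the average market price. A minimal sketch with hypothetical figures (not Danaher's actual data):

```python
# Treasury stock method for diluted EPS, as described in the text.
# All inputs below are hypothetical, for illustration only.

def diluted_shares(weighted_avg_shares: float,
                   options_outstanding: float,
                   strike: float,
                   avg_market_price: float) -> float:
    proceeds = options_outstanding * strike          # cash from assumed exercise
    buyback = proceeds / avg_market_price            # shares repurchased with it
    incremental = options_outstanding - buyback
    # Anti-dilutive options (strike >= market price) are excluded entirely,
    # as with the ~4 million options noted in the filing.
    if incremental <= 0:
        return weighted_avg_shares
    return weighted_avg_shares + incremental

def diluted_eps(net_earnings: float, shares: float) -> float:
    return net_earnings / shares

# 1m options at a $50 strike with an $80 average market price:
# 1m shares issued, 625k repurchased, 375k incremental shares.
print(diluted_shares(100_000_000, 1_000_000, 50.0, 80.0))  # 100375000.0
```

Note how an in-the-money option contributes only its net dilution, which is why a higher market price relative to the strike increases the dilutive effect.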
Intersegment amounts are not significant and are eliminated to arrive at consolidated totals. There has been no material change in total assets or liabilities by segment since December 31, 2016. You should read this discussion along with the Company’s MD&A and audited financial statements as of and for the year ended December 31, 2016 and Notes thereto, included in the Company’s Current Report on Form 8-K filed on June 19, 2017 and the 2016 Annual Report on Form 10-K filed on February 22, 2017 (collectively, the “2016 Annual Report”) and the Company’s Consolidated Condensed Financial Statements and related Notes as of and for the three and six-month periods ended June 30, 2017 included in this Report. Unless otherwise indicated, all references in this report refer to continuing operations. Certain statements included or incorporated by reference in this quarterly report, in other documents we file with or furnish to the Securities and Exchange Commission (“SEC”), in our press releases, webcasts, conference calls, materials delivered to shareholders and other communications, are “forward-looking statements” within the meaning of the United States federal securities laws. 
All statements other than historical factual information are forward-looking statements, including without limitation statements regarding: projections of revenue, expenses, profit, profit margins, tax rates, tax provisions, cash flows, pension and benefit obligations and funding requirements, our liquidity position or other projected financial measures; management’s plans and strategies for future operations, including statements relating to anticipated operating performance, cost reductions, restructuring activities, new product and service developments, competitive strengths or market position, acquisitions and the integration thereof, divestitures, spin-offs, split-offs or other distributions, strategic opportunities, securities offerings, stock repurchases, dividends and executive compensation; growth, declines and other trends in markets we sell into; new or modified laws, regulations and accounting pronouncements; regulatory approvals; outstanding claims, legal proceedings, tax audits and assessments and other contingent liabilities; foreign currency exchange rates and fluctuations in those rates; general economic and capital markets conditions; the timing of any of the foregoing; assumptions underlying any of the foregoing; and any other statements that address events or developments that Danaher intends or believes will or may occur in the future. Terminology such as “believe,” “anticipate,” “should,” “could,” “intend,” “will,” “plan,” “expect,” “estimate,” “project,” “target,” “may,” “possible,” “potential,” “forecast” and “positioned” and similar references to future periods are intended to identify forward-looking statements, although not all forward-looking statements are accompanied by such words. Our growth could suffer if the markets into which we sell our products and services (references to products and services in this report also include software) decline, do not grow as anticipated or experience cyclicality. 
Certain of our businesses are subject to extensive regulation by the U.S. Food and Drug Administration and by comparable agencies of other countries, as well as laws regulating fraud and abuse in the health care industry and the privacy and security of health information. Failure to comply with those regulations could adversely affect our reputation and financial statements. The health care industry and related industries that we serve have undergone, and are in the process of undergoing, significant changes in an effort to reduce costs, which could adversely affect our financial statements. Our acquisition of businesses (including our recent acquisitions of Pall and Cepheid), joint ventures and strategic relationships could negatively impact our financial statements. Divestitures and other dispositions could negatively impact our business, and contingent liabilities from businesses that we have disposed could adversely affect our financial statements. We could incur significant liability if the 2016 spin-off of Fortive or the 2015 split-off of our communications business is determined to be a taxable transaction. Potential indemnification liabilities related to the 2016 spin-off of Fortive and the 2015 split-off of our communications business could materially and adversely affect our business and financial statements. A significant disruption in, or breach in security of, our information technology systems or violation of data privacy laws could adversely affect our business, reputation and financial statements. Our businesses are subject to extensive regulation; failure to comply with those regulations could adversely affect our financial statements and our business, including our reputation. Changes in tax law relating to multinational corporations could adversely affect our tax position. 
We are subject to a variety of litigation and other legal and regulatory proceedings in the course of our business that could adversely affect our business and financial statements. If we do not or cannot adequately protect our intellectual property, or if third-parties infringe our intellectual property rights, we may suffer competitive injury or expend significant resources enforcing our rights. The United States government has certain rights to use and disclose some of the intellectual property that we license and could exclusively license it to a third-party if we fail to achieve practical application of the intellectual property. Defects and unanticipated use or inadequate disclosure with respect to our products or services could adversely affect our business, reputation and financial statements. Certain of our businesses rely on relationships with collaborative partners and other third-parties for development, supply and marketing of certain products and potential products, and such collaborative partners or other third-parties could fail to perform sufficiently. Changes in laws or governmental regulations may reduce demand for our products or services or increase our expenses. International economic, political, legal, compliance, trade and business factors could negatively affect our financial statements. The results of the European Union membership referendum in the United Kingdom and their formal notice of withdrawal from the European Union could adversely affect customer demand, our relationships with customers and suppliers and our business and financial statements. See Part I—Item 1A of the Company’s 2016 Annual Report for a further discussion regarding reasons that actual results may differ materially from the results, developments and business decisions contemplated by our forward-looking statements. 
Forward-looking statements speak only as of the date of the report, document, press release, webcast, call, materials or other communication in which they are made. Except to the extent required by applicable law, we do not assume any obligation to update or revise any forward-looking statement, whether as a result of new information, future events and developments or otherwise. The Company’s long-term success depends in part on its ability to, among other things, consummate and integrate appropriate acquisitions, develop innovative and differentiated new products and services with higher gross profit margins, expand and improve the effectiveness of the Company’s sales force, continue to reduce costs and improve operating efficiency and quality, and effectively address the demands of an increasingly regulated environment. The Company is making significant investments, organically and through acquisitions, to address the rapid pace of technological change in its served markets and to globalize its manufacturing, research and development and customer-facing resources (particularly in high-growth markets) in order to be responsive to the Company’s customers throughout the world and improve the efficiency of the Company’s operations. While differences exist among the Company’s businesses, on an overall basis, sales from existing businesses increased 2.0% during the second quarter of 2017 as compared to the comparable period of 2016. Increased demand for the Company’s products and services on an overall basis, together with the Company’s continued investments in sales growth initiatives and the other business-specific factors discussed below contributed to year-over-year sales growth. Geographically, year-over-year sales growth rates from existing businesses during the second quarter of 2017 were led by the high-growth markets.
Sales from existing businesses in high-growth markets grew at a mid-single digit rate during the second quarter of 2017 as compared to the comparable period of 2016 led primarily by continued strength in China and India, partially offset by weakness in the Middle East. High-growth markets represented approximately 31% of the Company’s total sales in the second quarter of 2017. Sales from existing businesses in developed markets grew at a low-single digit rate during the second quarter of 2017 led primarily by growth in North America. The Company expects overall sales growth to continue for the remainder of 2017 and core growth rates to improve in the remainder of 2017 but remains cautious about challenges due to macro-economic and geopolitical uncertainties, including global uncertainties related to monetary, fiscal and trade policies. The Company regularly evaluates market needs and conditions with the objective of positioning itself to provide superior products and services to its customers in a cost-efficient manner. Consistent with this approach, during the three-month period ended June 30, 2017, the Company made the strategic decision to discontinue a molecular diagnostic product line in its Diagnostics segment. As a result, the Company recorded $76 million of pretax restructuring, impairment and other related charges ($51 million after-tax or $0.07 per diluted share). These charges included $49 million of noncash charges for the impairment of certain technology-related intangible assets as well as related inventory and property, plant, and equipment with no further use. In addition, the Company incurred $27 million of cash restructuring costs primarily related to employee severance and related charges. These restructuring charges are expected to result in annual savings in 2018 of approximately $40 million. 
On a year-over-year basis, currency exchange rates adversely impacted reported sales by approximately 1.5% for both the three and six-month periods ended June 30, 2017 primarily due to the strength of the U.S. dollar against several major currencies in the first six months of 2017 compared to 2016. If the currency exchange rates in effect as of June 30, 2017 were to prevail throughout the remainder of 2017, currency exchange rates would have a negligible impact on the Company’s estimated full year 2017 sales as the U.S. dollar is currently weaker in comparison with rates experienced in the second half of 2016 which would offset the negative sales impact reported in the first half of 2017. Any future strengthening of the U.S. dollar against major currencies would adversely impact the Company’s sales and results of operations, and any weakening of the U.S. dollar against major currencies would positively impact the Company’s sales and results of operations for the remainder of the year. Sales from existing businesses is a non-GAAP measure that excludes sales from acquired businesses and the impact of currency translation; it is calculated as the period-to-period change in revenue (excluding sales from acquired businesses) after applying current period foreign exchange rates to the prior year period. Sales from existing businesses should be considered in addition to, and not as a replacement for or superior to, sales, and may not be comparable to similarly titled measures reported by other companies. Management believes that reporting the non-GAAP financial measure of sales from existing businesses provides useful information to investors by helping identify underlying growth trends in Danaher’s business and facilitating comparisons of Danaher’s revenue performance with its performance in prior and future periods and to Danaher’s peers. Management also uses sales from existing businesses to measure the Company’s operating and financial performance.
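The calculation defined here — strip acquisition revenue from the current period, restate the prior-year period at current-period exchange rates, then compare — can be sketched in a few lines. The helper name and the sample figures below are hypothetical:

```python
# Sketch of the "sales from existing businesses" (core growth) calculation
# as defined in the text. Inputs are hypothetical local-currency amounts.

def fx_adjusted_core_growth(current_sales_by_ccy: dict,
                            acquired_sales_by_ccy: dict,
                            prior_sales_by_ccy: dict,
                            current_fx: dict) -> float:
    """Growth rate of existing-business sales with currency effects removed."""
    current_core = sum(
        (current_sales_by_ccy[c] - acquired_sales_by_ccy.get(c, 0.0)) * current_fx[c]
        for c in current_sales_by_ccy)
    # Prior-year sales translated at *current* period rates, per the definition.
    prior_restated = sum(prior_sales_by_ccy[c] * current_fx[c]
                         for c in prior_sales_by_ccy)
    return current_core / prior_restated - 1.0

growth = fx_adjusted_core_growth(
    current_sales_by_ccy={"USD": 100.0, "EUR": 55.0},
    acquired_sales_by_ccy={"EUR": 5.0},       # revenue from an acquisition
    prior_sales_by_ccy={"USD": 98.0, "EUR": 49.0},
    current_fx={"USD": 1.0, "EUR": 1.14},
)
print(round(growth, 3))
```

Because both periods are translated at the same rates, any move in the euro between the two periods drops out of the growth figure, which is exactly the point of the measure.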
The Company excludes the effect of currency translation from sales from existing businesses because currency translation is not under management’s control, is subject to volatility and can obscure underlying business trends, and excludes the effect of acquisitions and divestiture-related items because the nature, size, timing and number of acquisitions and divestitures can vary dramatically from period-to-period and between the Company and its peers and can also obscure underlying business trends and make comparisons of long-term performance difficult. Throughout this discussion, references to sales volume refer to the impact of both price and unit sales and references to productivity improvements generally refer to improved cost-efficiencies resulting from the ongoing application of the Danaher Business System. Operating profit margins were 15.2% for the three-month period ended June 30, 2017 as compared to 16.7% in the comparable period of 2016. Operating profit margins were 15.0% for the six-month period ended June 30, 2017 as compared to 16.2% in the comparable period of 2016. The Company’s Life Sciences segment offers a broad range of research tools that scientists use to study the basic building blocks of life, including genes, proteins, metabolites and cells, in order to understand the causes of disease, identify new therapies and test new drugs and vaccines. The segment, through its Pall business, is also a leading provider of filtration, separation and purification technologies to the biopharmaceutical, food and beverage, medical, aerospace, microelectronics and general industrial segments. During the first quarter of 2017, a product line was transferred from the Life Sciences segment to the Environmental & Applied Solutions segment. While this change is not material to segment results in total, the resulting change in sales growth has been included in the “Acquisitions and other” line in the table above.
Price increases in the segment contributed 0.5% to sales growth on a year-over-year basis during both the three and six-month periods ended June 30, 2017, and are reflected as a component of the change in sales from existing businesses. Sales of the business’ broad range of mass spectrometers grew on a year-over-year basis during both the three and six-month periods ended June 30, 2017, led by strong sales growth in China and Western Europe, across the food, pharmaceutical and academic end-markets, partially offset by declines in demand in the clinical end-market in the United States. Sales of microscopy products grew on a year-over-year basis during the six-month period ended June 30, 2017, due primarily to increased demand in the high-growth markets. For the three-month period ended June 30, 2017, increased microscopy demand in the high-growth markets was partially offset by lower demand in North America, primarily in the medical and life science research end-markets. Demand for the business’ flow cytometry and genomics products was strong across all major product lines in both the three and six-month periods ended June 30, 2017 as compared to the comparable periods in 2016, due to strong demand in North America and China, partially offset by declines in demand in Japan. Demand for filtration, separation and purification technologies increased in both the three and six-month periods ended June 30, 2017 as compared to the comparable periods in 2016, primarily in the biopharmaceutical, medical and microelectronics end-markets. For these businesses, increased demand in the developed markets, particularly North America and Asia, was partially offset by declines in the Middle East, largely due to a major project in 2016 which did not repeat in 2017. 
The Company’s Diagnostics segment offers analytical instruments, reagents, consumables, software and services that hospitals, physicians’ offices, reference laboratories and other critical care settings use to diagnose disease and make treatment decisions.
Let's begin by putting a DVD into the Apple DVD Player, going to the Window menu, and choosing Show Infos. As soon as the DVD is inserted, normally a number of screens come up: copyright notices, perhaps a prompt asking in which language you want the disc read, or film trailers. Usually this finishes by showing a beautiful menu page, still or animated, with different options (buttons) which you can select using the remote control. Then the film begins. If you watch the info window during all these on-screen changes, you will see the title and chapter numbers change and sometimes disappear. What you need to understand is that when the number appears, you are seeing a title; when it doesn't, you are seeing a menu. This may seem obvious, but it isn't: you could easily watch a sequence that is a few minutes long without it being a title. Make a note of this, as it will be important later. As you probably know, video sequences on a DVD are encoded as MPEG-2 (occasionally as MPEG-1). MPEG-2 compression consists of cutting the sequence into more or less compressed blocks of images. There are three types of images. I images (Intra) are independent and are created using a single JPEG-style compression. P images (Predictive) allow motion compensation forward from previous P or I images. B images (Bidirectionally predictive) allow interpolation both forward and backward. I B B P B B P B B P B B / I . . . This sequence of images is called a Group of Pictures (GOP), and the DVD standard requires that a GOP always start with an I image; a GOP lasts about half a second. Once these restrictions have been followed, the soundtracks are added, then the subpictures and some other data of use to the DVD. The whole thing is called a VOBU (Video Object Unit), but since I find this term rather nondescript, I've rechristened it Object (OK, I admit, I'm not the first to have done this).
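The GOP rule just stated can be checked mechanically. A small sketch (the function names are mine, for illustration):

```python
# Sketch of the GOP rule described above: a DVD-compliant group of
# pictures must open with an I frame, followed only by I, P and B frames.

def is_valid_dvd_gop(frames: str) -> bool:
    """frames is a string like 'IBBPBBPBBPBB', one letter per picture."""
    if not frames or frames[0] != "I":
        return False                      # must start with an intra frame
    return all(f in "IPB" for f in frames)

def gop_duration_seconds(frames: str, fps: float) -> float:
    """At 25 fps (PAL), a 12-frame GOP lasts about half a second."""
    return len(frames) / fps

print(is_valid_dvd_gop("IBBPBBPBBPBB"))          # True
print(gop_duration_seconds("IBBPBBPBBPBB", 25))  # 0.48
```

The 0.48-second result for a 12-frame PAL GOP is where the "about half a second" figure in the text comes from.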
An Object (as referred to in myDVDEdit) is a subset of a video sequence composed of a still image, the data permitting the calculation of the intermediate images, and the data useful for managing the DVD (including navigation and subpictures). The Object lasts half a second. To put together a whole film, you need lots of Objects, and such a group of Objects is termed a Cell in a DVD. However, since you want to be able to cut the film up into lots of chapters, the Cells are in turn grouped into Programs. Note that although a chapter is always a Program, a Program isn't necessarily a chapter. To obtain a complete film, lots of these Programs are grouped together into what is called a PGC (Program Chain). Although the notion of a PGC is not considered by most people, it is an extremely important element in a DVD. A DVD is in fact nothing more than a succession of PGCs along with a command language which allows them to be linked together. There are three types of PGCs: the First Play PGC, which runs at the start of the DVD; Title PGCs, which are for the titles; and Menu PGCs, which are for the menus. In all three cases the format is exactly the same; it's how they're used that differs. Now open the DVD directory to see what it contains. Be careful, because to do this, the DVD mustn't be protected. You must be the owner and have the legal right to do this! If your DVD isn't on your hard drive, copy it over. This will shorten the loading time of your resources. IFO (Information) files contain all the data that describe the DVD structure. BUP (Backup) files are exact copies of the IFO files, with the same names. They are there to replace the IFO files if they should become unreadable. Finally, VOB (Video Object) files contain all the images, sounds, subtitles, etc., which will be played on your screen and speakers. A VOB file must never exceed 1GB. Now let's have a look at the names of these files. The VIDEO_TS.VOB file contains all the images and sounds used by VMG Menu PGCs.
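The Object → Cell → Program → PGC containment just described can be sketched as plain data classes. The names follow this article's terminology, not an official API:

```python
# The DVD containment hierarchy described above, as simple data classes.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DVDObject:            # one VOBU: ~half a second of video + nav data
    duration_s: float = 0.5

@dataclass
class Cell:                 # a run of Objects
    objects: List[DVDObject] = field(default_factory=list)

@dataclass
class Program:              # groups Cells; a chapter is always a Program
    cells: List[Cell] = field(default_factory=list)

@dataclass
class PGC:                  # Program Chain: one whole playable sequence
    kind: str               # "first_play", "title" or "menu"
    programs: List[Program] = field(default_factory=list)

    def duration_s(self) -> float:
        return sum(o.duration_s
                   for p in self.programs for c in p.cells for o in c.objects)

# A 10-second title: 1 program, 1 cell, 20 half-second Objects.
title = PGC("title", [Program([Cell([DVDObject() for _ in range(20)])])])
print(title.duration_s())  # 10.0
```

The `kind` field mirrors the three PGC types the article lists; as the article says, the structure is identical in all three cases and only the use differs.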
The VTS_xx_n files are the Video Title Sets. A DVD can contain up to 99 VTSs, so there can be VTS_01_n up to .... VTS_99_n, bravo! Each VTS contains a number of titles, chosen by the designer. There could easily be 1 title in VTS_01, 5 titles in VTS_02 and 18 in VTS_03. Each title corresponds to a VTS Title PGC, but the VTS also contains VTS Menu PGCs. The VTS_xx_0.VOB files contain the images and sounds for the menus. They cannot exceed 1GB, so animated menus mustn't be too long. The VTS_xx_n.VOB files, where n >= 1, contain the images and sounds of the titles. Since a title can last over an hour, sometimes more than 2 hours, you often need many gigabytes to store them. Since each VOB file mustn't exceed 1GB, the data is cut up into pieces, numbered from 1 to 9. I can tell you're getting a bit lost here, so let's move on to something more 'hands on' and you'll start to understand. Select the File menu, choose Open, and then select the DVD you want to edit. Then choose the VIDEO_TS folder or its parent folder; either one will do. After a few seconds (the time needed to read all the IFO files), a window appears. It's divided into five zones. Zone 1: The PGC selector. At the top you will see the IFO files. You will see from my example that there are two lines of VTS Menu 1, one for English, one for French. This is the main difference between the Menu PGCs and VTS PGCs (titles): there can be a different Menu PGC list for each language. At the bottom you will find the PGC list. Click on VTS 1 to see what a Title PGC looks like. • The VTS Title number corresponds to the title number in the VTS. Make sure you don't confuse it with the title number in the DVD. • The PGC duration is expressed in hours/minutes/seconds/frames. Frames are a subdivision of a second well known to those who work with video. All you need to know is that on a DVD, there are 25 frames per second in PAL mode and 30 frames per second in NTSC.
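The naming scheme and the frame arithmetic above can be sketched in a few lines (hypothetical helpers, not part of myDVDEdit):

```python
# Sketch: decoding the VTS_xx_n.VOB naming scheme described above, plus
# the timecode arithmetic (25 fps for PAL, 30 fps for NTSC).
import re

def parse_vob_name(name: str):
    """Return (vts_number, part) for names like 'VTS_02_3.VOB'.
    Part 0 holds the menus; parts 1..9 hold the title video."""
    m = re.fullmatch(r"VTS_(\d{2})_(\d)\.VOB", name)
    if m is None:
        return None                       # e.g. VIDEO_TS.VOB, IFO, BUP
    return (int(m.group(1)), int(m.group(2)))

def timecode_to_frames(h: int, m: int, s: int, f: int, mode: str = "PAL") -> int:
    """Convert a hours/minutes/seconds/frames PGC duration to a frame count."""
    fps = 25 if mode == "PAL" else 30
    return ((h * 60 + m) * 60 + s) * fps + f

print(parse_vob_name("VTS_02_3.VOB"))    # (2, 3)
print(timecode_to_frames(0, 1, 30, 12))  # 2262
```

So a PAL duration of 0:01:30.12 is 90 seconds of 25 frames plus 12, i.e. 2262 frames.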
• The Number of angles: A DVD can manage up to nine angles. • The Number of soundtracks: (Audio stream) there can be up to eight soundtracks. • The Number of subpictures: I use the term subpictures because although subpictures are used to generate the subtitles of a film, they are also used for other things, for example to show the selection of a button in a menu or to make white rabbits appear in the middle of a film. A DVD can manage up to 32 subpictures. • The Number of Pre, Post, and Cell Commands: The Commands are instructions executed by the DVD player. These instructions contain the intelligence of a DVD: they allow you to modify the internal registers, make tests and execute all kinds of procedures depending on the results. The Pre commands are executed when the PGC begins, the Post commands are executed when the PGC ends, and the Cell Commands are executed when the display of a Cell ends, if the DVD designer requires it. The total number of Pre commands, Post commands and Cell Commands cannot be more than 128. If, at the top of zone 1, you select Titles instead of All PGCs, you will see a complete list of the Title PGCs on the DVD. You will no longer have access to First Play, nor to the Menu PGC lists. The display is slightly different since, for each PGC, it shows the corresponding title number as well as the number of the VTS file where the title is found and also the PGC number in the VTS. For example, if you have Title 5 - VTS 2 - Pgc 4, this means that the PGC of title 5 can be found in file VTS_02_0.IFO and that it's PGC number 4 in that file. If you select a title in that list and you return to the All PGCs mode, you will see the same PGC selected. In zone 2 all the data on the DVD is grouped together. There are three types: DVD data, IFO data and PGC data. The selector to the left of the zone allows you to choose which type of data is displayed.
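The limits listed above lend themselves to a simple sanity check. A hypothetical validator, not a myDVDEdit feature:

```python
# Sketch of the per-PGC limits the article lists: at most 9 angles,
# 8 audio streams, 32 subpicture streams, and 128 commands in total
# (Pre + Post + Cell commands combined).

def pgc_limit_errors(angles: int, audio: int, subpictures: int,
                     pre: int, post: int, cell: int) -> list:
    errors = []
    if not 1 <= angles <= 9:
        errors.append("angles must be 1-9")
    if audio > 8:
        errors.append("at most 8 audio streams")
    if subpictures > 32:
        errors.append("at most 32 subpicture streams")
    if pre + post + cell > 128:
        errors.append("at most 128 commands in total")
    return errors

print(pgc_limit_errors(1, 2, 4, 10, 5, 0))    # [] -- a well-formed PGC
print(pgc_limit_errors(1, 9, 4, 100, 30, 0))  # two violations
```

Note that the 128-command ceiling is shared across all three command lists of a PGC, which is why the check sums them rather than testing each list separately.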
DVD gives you access to the general DVD parameters, such as the Provider ID, or the regions the DVD is authorized for. IFO gives access to the general parameters of First Play, the VMG Menu, the VTS Menu or the VTS (Titles), according to the choice made at the top of zone 1. PGC gives you access to the parameters of the PGC selected at the bottom of zone 1.
• DVD version: format version of the DVD, normally 1.1; there usually isn't any reason to change this.
• Volume/Side: the volume number and the side of the DVD.
• Provider ID: the name of the DVD's designer, or the production house.
• Authorized Regions: section indicating the geographical regions where the DVD is authorized to play. The world is divided into 6 regions, each carrying a number from 1 to 6. This is to protect the rights of the film distributors and prevent a film that has been released early in one region from being seen in another before its release there. The DVD players sold in each of these regions are made so that they can only read DVDs authorized for their region and refuse any others. By modifying these options, you indicate the regions for which the DVD is authorized and those for which it is prohibited. Region 7 is not currently used and region 8 is allocated to planes, cruise ships and hotels.

Not implemented. In a future version this tab will allow you to modify the parental control parameters of the DVD.

Not implemented. In a future version, this tab will allow you to configure the text on the DVD. The main use of these parameters is to give the DVD a name which can be read by some players, but there are even greater possibilities, since you could give a name to each title, each chapter and other pieces of information. In practice, very few DVDs contain this information and even fewer players are capable of exploiting it; it is used almost exclusively on Karaoke DVDs.

One further important DVD notion not yet discussed is the notion of domain.
Each domain contains 1 or more PGCs. In fact, a domain corresponds to each line of the table at the top of zone 1, except when there are several languages for the VTS Menu, all of which form part of the same domain. There are four types of domains: the First Play, the VMG Menu, the VTS Menu and the VTS (titles). The VIDEO_TS.IFO file contains the First Play domain and the VMG domain, while each VTS_xx_0.IFO file contains the VTS Menu domain and the corresponding VTS domain. Each domain has its own Video, Audio and Subpictures parameters, except for the First Play, which can't display any film. It's important to note the close link between these parameters and those contained in the MPEG stream of the films. If you set a parameter to a value which doesn't correspond to the MPEG stream, the result is uncertain: some players read the IFO information to display the film while others only take the MPEG stream parameters into account. It is also worth noting that all these parameters concern the totality of any given domain. If the domain contains many videos, they must all be encoded in the same way. It would be impossible, for example, to have a film in PAL and one in NTSC in the same domain, or a film in 4:3 and one in 16:9. You are obliged to put them into separate domains. Since the whole of the VTS Menu for any given VTS file is part of the same domain, there can't be any parameter specific to one language. For instance, if the VTS Menu 2 French has two audio tracks, the first one with 2 channels and the second one with 6 channels, it would necessarily have to be the same for the VTS Menu 2 English or the VTS Menu 2 German. The Language parameter of the VMG Menu and the VTS Menu is an exception to the rule because it isn't part of the domain parameters; instead it belongs to a sub-division of a domain known as an LU (Language Unit), which allows different menus to be displayed depending on their language.
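The mapping between IFO files and domains described above can be summed up in a few lines. This is only an illustration of the layout, not a parser; the file names follow the DVD naming convention already introduced:

```python
def domains_in(ifo_name):
    """Which of the four domain types a given IFO file holds."""
    if ifo_name == "VIDEO_TS.IFO":
        return ["First Play", "VMG Menu"]
    if ifo_name.startswith("VTS_") and ifo_name.endswith("_0.IFO"):
        return ["VTS Menu", "VTS (titles)"]
    raise ValueError("not a recognized IFO file: " + ifo_name)

print(domains_in("VIDEO_TS.IFO"))  # ['First Play', 'VMG Menu']
print(domains_in("VTS_02_0.IFO"))  # ['VTS Menu', 'VTS (titles)']
```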
The encoding type is MPEG-1 or MPEG-2, following the PAL or NTSC standard, with a specific resolution, compressed in VBR (Variable Bit Rate) or in CBR (Constant Bit Rate). All these attributes must correspond to the real encoding parameters of the video stream. There remains the Aspect attribute. This tells the DVD player how it has to display the image on screen. The five possible values are: 4:3, 16:9, 16:9 auto pan&scan, 16:9 auto letterbox and 16:9 auto pan&scan and letterbox. Let's leave aside the last case, which we will examine later. Notice that whatever the original aspect ratio of an image (16:9 or 4:3), it is recorded to the DVD in one of 8 possible resolutions (not counting MPEG-1), 4 for PAL and 4 for NTSC. In 99% of cases, the resolution used is 720x576 in PAL and 720x480 in NTSC, but in all cases, the picture is recorded distorted. When the DVD player needs to know how to display a picture, it reads the Aspect attribute; the picture will then be stretched or truncated depending on the attribute. The following table shows the different outcomes according to the aspect ratio of the original picture, the screen format, and the value of the Aspect attribute. To recap: if your original picture is in 4:3, you have to set the attribute to 4:3 so that it doesn't appear distorted on a 16:9 screen; if your original picture is in 16:9, it will be truncated in pan&scan mode on a 4:3 screen and reduced in letterbox mode. And the 16:9 auto pan&scan and letterbox mode, I hear you say? In fact, this corresponds to one of the last two modes, chosen according to the preference entered by the user of the DVD player.

A DVD can manage up to 8 audio tracks. Don't confuse the audio tracks with the audio streams integrated into the MPEG data (see further on, the audio attributes of the PGC). Each track is numbered from 0 to 7. It's this number which is used in the commands to designate the audio track to be used.
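Stepping back to the Aspect attribute for a moment, the "recorded distorted" point above is just arithmetic: the same 720-pixel-wide stored frame must be shown at different widths depending on the intended aspect ratio. A small sketch (display widths in square pixels, for illustration only):

```python
from fractions import Fraction

def display_width(stored_height, aspect):
    """Width, in square pixels, at which a stored frame must be shown
    so that it has the given aspect ratio on screen."""
    return int(stored_height * Fraction(*aspect))

# the same 720x576 PAL frame, under the two basic Aspect attributes:
print(display_width(576, (4, 3)))    # 768  -- slightly wider than 720
print(display_width(576, (16, 9)))   # 1024 -- much wider: anamorphic stretch
```

Either way the stored width is 720, which is why the picture is always distorted on disc and only corrected at playback.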
To add a new track, just click on the [+] button; to delete one, click on the [-] button. The track order can be modified using drag and drop: click on the audio track, hold the mouse button down, and move the track to the required position. The track order corresponds to the order in which the tracks are selected each time the user pushes the Audio button on their remote control. By default, only the main attributes of the audio tracks are visible. To make all the attributes appear, click on the small triangle just next to the track number. All these parameters are directly linked to the audio stream, so you have to use their true values. The most useful parameter is Language, which corresponds to the name of the language displayed when the user pushes the Audio button on the remote control.

A DVD can manage up to 32 subpicture tracks, except in the menus, where there is only one. Once again, don't confuse subpicture tracks with subpicture streams. Each track is numbered from 0 to 31 and that number is used in the commands to designate which subpicture track to use. The track order can be modified using drag and drop. The track order corresponds to the order in which the tracks are selected each time the user pushes the Subtitle button on the remote control. Language allows you to indicate which language the subtitle corresponds to. This is because although subpictures aren't just used for subtitles, it is their principal use. Type designates the use to be made of the subpicture when it is a subtitle. Encoding designates the coding algorithm of the subpictures. I only know the 2-bit RLE. If you ever come across a DVD with an Extended encoding, please let me know about it.

This gives access to all the parameters of the PGC selected in the lower part of zone 1. Here you have the general parameters of the PGC. PGC Entry: This parameter depends on the type of PGC. In the VMG Menu, you can indicate the PGC called Title Entry.
The PGC Title Entry is the PGC executed when the user pushes the Title button on his remote control. There can only be one single PGC Title Entry. For the VTS Menu, it allows you to indicate if the PGC is a specific entry.
- root: the PGC is executed when the user pushes the Menu button on the remote control.
- subpicture: the PGC is executed when the user pushes the Subtitles Menu button on the remote control. This button is present on only a very few players. Not to be confused with the Subtitle button.
- audio: the PGC is executed when the user pushes the Audio Menu button on the remote control. This button is present on only a very few players. Not to be confused with the Audio button.
- angle: the PGC is executed when the user pushes the Angle Menu button on the remote control. This button is present on only a very few players. Not to be confused with the Angle button.
There can only be one PGC of each type per VTS Menu. Apart from the root entry, the others aren't often used. In fact, when they are, it's so as to easily access a PGC in the commands; the keys are then forbidden using the Prohibited User Options (see further on). For the VTS (titles), the PGC entry of a title can be shown. Sometimes a title spans many PGCs; when that's the case, the PGC entry indicates which one will be executed when the title starts.

Pause: Allows the player to stop on the last image of the PGC. The pause can either be infinite, in which case pushing any button on the remote control will cause the player to continue reading immediately, or last from 1 to 254 seconds.

Sequential: the Cells are read in order. Random: the Cells are read in a random fashion; the same Cell can be seen many times. Shuffle: the Cells are read in a random fashion, but no Cell will be seen a second time until all the Cells have been seen.

Audio Parameters: Configuration of the audio tracks used by the PGC and selection of the corresponding audio streams.
Subpictures Parameters: Configuration of the subpicture tracks used by the PGC and selection of the corresponding subpicture streams.

NextPgc: Number of the PGC started by the command Link NextPgc, or, for titles recorded over many PGCs, when the player has to go to the next PGC. This parameter only appears in VTS domains.

PrevPgc: Number of the PGC started by the command Link PrevPgc, or, for titles recorded over many PGCs, when the player has to go to the previous PGC. This parameter only appears in VTS domains.

GoUpPgc: Number of the PGC started by the command Link GoUpPgc, or when the user pushes the GoUp button (also called Return) on the remote control. In the VTS Menu domain, GoUpPgc can also execute a Resume, which causes a return to the last title being read.

But why this per-PGC selection of audio tracks and subpictures? It's a possibility which is used frequently. Look at films which offer a bonus audio commentary by the director. The commentary is in fact just a specific audio track. The VTS (title) of the film will be composed of two PGCs: the first one will have all the audio tracks selected except for the commentary, the second will have only the commentary selected. You're going to tell me that if there are two PGCs, the film will take up twice the amount of space. No it won't, because the PGCs only contain references to the video stream. The two PGCs reference the same video stream.

This is definitely the most important tab, showing the list of cells in a PGC.
Cell: The number of the cell.
Title: The number of the title and the chapter corresponding to that cell.
Prgm: The number of the program, when the cell is also the beginning of a program. Remember that while every chapter corresponds to a program, a program isn't necessarily the beginning of a chapter.
Angle: The number of the angle. This only appears with multi-angle titles.
Playback time: Duration of the cell, in hours, minutes, seconds, frames, followed by the number of frames per second (25 for PAL, 30 for NTSC).

Time/Still Time: When the player arrives at the end of the cell, or when there's a pause, the amount of time the last frame of the cell will remain visible. This option is often used in slideshows, or to display a copyright. An infinite pause means that the player will stay on the last frame, waiting for the user to push one of the remote control buttons. This option is usually used to display fixed menus.

VobId: Object reference. In other programs, you will usually find two numbers, the VobId and the CellId. I found it easier to group them under one name. The IFO file contains the Cell Address Map table, used to find the address of each object in the VOB files from the VobId. There are several reasons for this indirection. It allows you to access the same objects from two different cells. So for example you can have two PGCs presenting exactly the same film, thus with the same VobIds, but with a forced subpicture stream or with different commands. The second reason is to allow the non-sequentiality of the objects in the VOB files. When the PGC is multi-angle, during the making of the VOB files, the objects are not recorded by placing all the objects of angle 1, then all the objects of angle 2, etc. Instead, it places first one object of angle 1, then one object of angle 2, and then it starts again. This makes it much quicker to find an object when the angle changes.

Command: The Cell command to execute at the end of the cell; the number corresponds to the number of the command line which must be executed in the Cell Commands. The small triangle next to the cell number allows you to configure it.

Pause: Allows you to configure a pause at the end of the cell.

Command: Lets you select the command to be executed at the end of a cell. The command must be recorded beforehand in the Cell Commands.
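The VobId indirection described above can be sketched with a toy Cell Address Map. All names, file names and sector numbers here are hypothetical, purely to illustrate how two PGCs can point at the same objects without duplicating the film:

```python
# hypothetical address map: (VobId, CellId) -> (VOB file, start sector)
cell_address_map = {
    (1, 1): ("VTS_01_1.VOB", 0),
    (1, 2): ("VTS_01_1.VOB", 18456),
}

def locate(vob_id, cell_id):
    """Resolve a VobId/CellId pair to its place in the VOB files."""
    return cell_address_map[(vob_id, cell_id)]

# two PGCs referencing the same objects: same film, stored only once
pgc_film       = [(1, 1), (1, 2)]   # e.g. all audio tracks selected
pgc_commentary = [(1, 1), (1, 2)]   # same cells, commentary track only
print(locate(*pgc_film[0]) == locate(*pgc_commentary[0]))  # True
```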
Seamless playback: If playback is interrupted, it means that the player has stopped decoding the MPEG stream and has emptied its memory, ready to start decoding anew. Because several tenths of a second are needed to display a new image, an interruption is very visible if it happens in the middle of a film. Thus it must be avoided if at all possible. However, there are certain instances where interruption is inevitable, even indispensable. Inevitable when, for instance, the DVD player needs to execute a command. In fact, the DVD Forum has imposed the interruption of a film's playback in order for a command to be executed, whether it's a Post command, an order to change the PGC, or a Cell Command. Thus you can't use a Cell Command to choose the next Cell without it being visible, unless you go through a black screen. Indispensable also when you change the DVD layer, because changing layer can only happen if the MPEG decoder has stopped. See the note below about changing layer.

STC (System Time Clock) Discontinuity: The DVD player has a clock system called the STC (System Time Clock) which permits the synchronisation of the decoding of images with the sounds or subpictures, and then their display on the screen. This parameter informs the player that there is a discontinuity in the STC; in other words, the value of the STC at the end of the last frame of the preceding Cell no longer corresponds with the value of the STC of the first frame of the new Cell. The DVD player must therefore resynchronise the STC. There is often an STC discontinuity at the start of every Chapter, and even every Cell.

Restricted Access: DVD players allow the possibility of fast forwarding or rewinding, at variable speeds. When the player comes across a Cell marked Restricted Access, it immediately returns to normal speed. I, personally, have never come across a DVD using this function.
Interleaved Objects: As explained in chapter 2, a film is made up of many objects, also called VOBUs. Normally, these objects are recorded in the .VOB files one after another, but not always. Let's examine the case where you want to have multiple angles. If the angles are of large size, when you change angle, the DVD player's laser will have to carry out a displacement which takes a long time. If the time is too long, the memory buffer of the player risks running out of data and the player might interrupt the MPEG decoder, or even display an error. To avoid this, the objects are interleaved. This way, the passage from one angle to another is almost instantaneous and without interruption of playback. The Interleaved Objects parameter signals to the DVD player that the objects of that Cell are mixed with the objects of another Cell.

When a DVD contains too much information for one layer, you can use a dual layer DVD. The first layer is called Layer 0 and the second one Layer 1. A DVD is made up of data recorded on a single spiral which goes from the centre of the disc outwards. When there is a second layer, it can either begin from the exterior of the disc and go inwards to the centre, which is called OTP (Opposite Track Path), or go from the centre of the disc out towards the exterior, which is called PTP (Parallel Track Path). The OTP mode is systematically used for video DVDs so as to reduce the time taken to change layer. The PTP mode is most often used for DVDs containing computer data, since each layer can be treated separately, a bit like the two sides of a record. The moment when you change layer must be positioned correctly: in OTP mode, Layer 1 must obligatorily be equal in size to or smaller than Layer 0, and the layer change must be placed at the start of a Cell which is marked Seamless Playback: No.
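The OTP constraint just stated is the kind of check an authoring tool might run; here is a minimal sketch (sector counts are illustrative, and the function name is mine, not myDVDEdit's):

```python
def otp_layout_valid(layer0_sectors, layer1_sectors):
    """True if the dual-layer OTP constraint (Layer 1 <= Layer 0) holds."""
    return layer1_sectors <= layer0_sectors

print(otp_layout_valid(1_200_000, 1_100_000))  # True
print(otp_layout_valid(1_000_000, 1_100_000))  # False -- layer break misplaced
```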
If you modify a DVD by deleting objects placed before the layer change point, it can happen that the size of Layer 0 becomes smaller than Layer 1. The layer change will then no longer work. In a future version of myDVDEdit, you will be able to correct this type of problem.

These are the colors used to draw the subpictures (I should mention in passing that buttons are also subpictures). There is the possibility of saving a color palette by using the Save as... option in the Presets control. These saved palettes can be reused at any time. You can also use one of the predefined palettes. (R, G, B) shows the red, green and blue components of each color. (Y, Cr, Cb) shows the luminance, the red chrominance and the blue chrominance of each color. (R%, G%, B%) shows the percentage of red, green and blue in each color. You can change the display mode whenever you wish. You can also change the default display mode in the myDVDEdit preferences.

Pre Commands: Commands executed at the start of the PGC. The number in brackets, in the name of the tab, gives you the number of Pre commands.

Post Commands: Commands executed at the end of the PGC, that is, when the last Cell of the PGC has finished and no Cell Command is attached to it. The number in brackets, in the name of the tab, gives you the number of Post commands.

Cell Commands: Commands executed at the end of a Cell, if it is programmed to do so. If you want to execute a command at the end of a particular Cell, you first have to create the command in this tab, then go into the Cells tab to modify the desired Cell.

This tab shows the Prohibited User Options. For example, when the Stream Change - Audio option is checked, the user is prohibited from changing the audio track in mid-playback.

At the Title level: The options Time Play or Search and Chapter Play or Search can be prohibited at the Title level.
If the title uses many PGCs, all those PGCs will have the same Title-level Prohibited User Options. At the VOB object level: these options will be prohibited during the whole playback of that object. The current version of myDVDEdit does not allow you to modify the Prohibited User Options at the VOB level. An option only needs to be prohibited at one of these levels for the DVD player to prohibit it.

myDVDEdit allows you to modify your commands. To do this, click on the small triangle to the left of the command and the editor will appear; you can then modify it completely. It's a contextual editor, which means that the available commands and their possible values depend on the type of PGC and the available resources. If you want to make a Jump Title/Chapter, only the available titles will be proposed and only the available chapters for a title will be shown. You can add a new command by clicking on the [+] button located at the bottom of the window, or by using the [+] key when the command table is selected. If a command is selected in the table, the new command is inserted just beneath it. If several commands are selected in the table, the new command is inserted just after the last selected command. If no commands are selected, the command is added at the end of the table. You can delete one or more commands by selecting them and then clicking on the [-] button located at the bottom of the window, or by pushing the [-] key or backspace. You can modify the position of a command by clicking on it with the mouse and dragging it to the desired position. You can Cut, Copy, and Paste one or more commands by selecting them and then calling up the corresponding menu command. All these modifications to the commands can be Undone or Redone with the corresponding menu commands. These modifications won't be saved to disk until the File/Save menu has been called up.
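The insertion rule just described (beneath the selection, after the last of several selected commands, or at the end of the table) can be sketched in a few lines. This is an illustration of the rule, not myDVDEdit's actual code:

```python
def insert_position(selection, table_len):
    """0-based index at which a new command is inserted, given the set of
    selected command indices (0-based) and the current table length."""
    if not selection:
        return table_len          # nothing selected: append at the end
    return max(selection) + 1     # just after the (last) selected command

print(insert_position([], 5))      # 5 -- appended at the end
print(insert_position([2], 5))     # 3 -- just beneath the selection
print(insert_position([1, 3], 5))  # 4 -- after the last selected command
```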
The Goto Adjusted option automatically corrects the line addresses of all Goto commands during the insertion, deletion, or move of one or more commands. If you select line 2 and click on the [+] button with the Goto Adjusted option on, the Goto of line 1 will be corrected. The number of commands still available is shown at the bottom of the window. For each PGC, the total number of pre commands, post commands and cell commands cannot exceed 128. I am sorry to tell you that I can't give you the complete documentation of the DVD commands. This will be the subject of another document when I have time to write it. I suggest you visit the following English language sites (dvdinfo and dvd-replica), which will allow you to learn more about these commands.

Zone 3 allows the visualization of the objects in the cell selected in the Cells tab. Here you will see the control buttons that enable you to move to the previous or the next object, a slider to select the exact object required, and the display screen for your objects (presumably you've already understood this). Located at the top left, above the screen, is the ratio indicator (16:9 or 4:3). If you click on it, you will be able to see your film in either type of screen. Often, in a DVD, the images are recorded in 16:9 but are compatible with 4:3, which means that when you go into 4:3, the image goes into letterbox mode. You will find this information in the IFO tab under the term 16:9 auto letterbox aspect. Now if you look at a Menu PGC, you will see that it is usually in 16:9 auto pan&scan. This means that when the screen is in 4:3, the image is not reduced, but the left and right sides of the image are truncated. Just next to it is a small selector for choosing between displaying the DVD screen or displaying the data of the object. I won't go into the significance of these parameters here; I've merely mentioned them for information purposes.
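Returning to the Goto Adjusted option described above, the adjustment itself is simple: when a command is inserted, every Goto whose target line falls at or after the insertion point must be bumped by one so it still reaches the same command. A sketch, with commands modeled as plain dicts for illustration:

```python
def insert_command(commands, line, new_cmd):
    """Insert new_cmd so that it becomes line `line` (1-based), bumping the
    target of every Goto that pointed at that line or beyond."""
    adjusted = [
        {**c, "target": c["target"] + 1}
        if c.get("op") == "goto" and c["target"] >= line
        else dict(c)
        for c in commands
    ]
    adjusted.insert(line - 1, dict(new_cmd))
    return adjusted

cmds = [{"op": "goto", "target": 3},   # line 1: jump to line 3
        {"op": "set", "reg": 1},       # line 2
        {"op": "exit"}]                # line 3
out = insert_command(cmds, 3, {"op": "nop"})
print(out[0]["target"])   # 4 -- the Goto still reaches the old line 3
```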
For those who wish to know more, look at the MPEG-2 documentation, ISO/IEC 13818-2, but you'll have to persevere with it; it's not simple. At the bottom left of the screen is the frame time of the object. This can be displayed or not by changing the option in the preferences. Next to it, the state of the GOP is displayed (as defined in chapter 2): one icon signifies that the GOP is closed, the other that it is open. This display can be hidden by modifying the option in the preferences. A GOP is said to be closed when it is not dependent on the previous GOP. When it's open, on the contrary, the DVD player needs the last frame of the previous GOP in order to construct the first frame of the new GOP. This information is important when you want to cut part of a film. When you cut at the level of an open GOP, it's necessary to close it, which usually causes the loss of 1 or 2 extra frames. Above the image, you'll find a recap of the position of the object selector. If you put the mouse on it and wait for 2 seconds, a tooltip will appear with the exact position on the disc of that object: the file name, the starting position of the object in that file, and the sector. A sector is 2048 bytes. An object contains a mixture of soundtracks, subpictures, and video. In fact, they aren't actually mixed up at all but are multiplexed. In other words one has, for example, 4 sectors of video, 2 sectors of audio stream 0, then 2 sectors of audio stream 1, and then 6 new sectors of video. All this allows the different decoders (the MPEG decoder, the audio decoder, the subpicture decoder) to work at the same time and never be short of data. Since an object always starts at an offset which is a multiple of 2048, all the tables used to find it provide not its offset but its sector number. This means that the values are a lot smaller. First of all, what is a subpicture?
Literally it means a sub-image, but in fact we should really talk about over-images, because the image comes on top of the film. The subpictures are multiplexed with the video stream and the audio's specific streams (for the experts: private stream 1, ids 0x20 to 0x3F). Each pixel of a subpicture is coded with one of four possible values:
0 : Background color (or B).
1 : Pattern color (or P).
2 : Emphasis 1 color (or E1).
3 : Emphasis 2 color (or E2).
The subpicture stream also contains a table giving, for each of these values, the corresponding color in the PGC Palette and a contrast. The contrast is a value from 0 to 15, where 0 is transparent and 15 is opaque. All the values between 0 and 15 therefore correspond to a level of transparency. Whatever the resolution of the film and the display mode (16:9 or 4:3), the subpicture always has a resolution of 720x480 in NTSC and 720x576 in PAL, but usually only a part of the image is coded, as the rest of the image is transparent. All this could easily have been encoded with just a few bytes, but that would be underestimating the overflowing imagination of the inventors of the DVD. Instead, they created the SP_DCSQ or, if you prefer, the SubPicture Display Control SeQuence. Each SP_DCSQ contains:
A time delay: the time to wait before executing the commands.
A pointer to the next SP_DCSQ, or pointing to itself if there isn't another SP_DCSQ.
End : Ends the SP_DCSQ.
Start Display : Displays the subpicture.
Forced Start Display : Forces the display of the subpicture; used for menus, or to force the display of a subtitle even if no subtitle is activated (to understand an alien, for example).
Stop Display : Makes the subpicture disappear.
Set Color : A command followed by the numbers of the colors to use for the B, P, E1 and E2 pixels.
Set Contrast : A command followed by the values of the contrast for the B, P, E1 and E2 pixels.
Set Display Area : Defines the display area of the subpicture.
Set Pixel Data Address : A command followed by two offsets, the first one defining the pixels of the odd lines, the second one defining the pixels of the even lines. There are two pixel tables because the DVD is usually meant for television, which displays the odd lines first, then the even lines (the well-known interlacing). This is easier for the decoder.
Change Color and Contrast : This command is rarely used. It allows you to define zones where the colors and contrasts are different. As we have already seen, there are four possible values per pixel, so normally there can't be more than four different colors in a subpicture (and usually only three, since there has to be a value for transparency, usually the Background color). This command allows you to have many more by dividing the image into zones where the colors and contrasts are different. This is, however, limited, because the image is cut into vertical bands of varying width, and each band can be cut horizontally, without there ever being more than 16 zones.
The first SP_DCSQ must contain at the very least: Set Color, Set Contrast, Set Display Area and Set Pixel Data Address. The total size (including the commands) of the subpicture data cannot exceed 53220 bytes.
Select a VTS, preferably one with lots of subtitles. Select the Informations tab of the PGC. You should then see the list of your subpictures (lines with an icon of a screen with subtitles on it). For subpictures used to display subtitles, there's a strong possibility that the name of the language corresponding to the subtitle will be shown, and there should also be the subpicture stream number for 16:9 and one (often the same) for letterbox. Move the object selector until the information appears in the Subpictures tab. In this window, you will recognize the SP_DCSQ under the name Sequence, followed by the time to wait before executing the commands. There is also a list of the commands described previously.
You will never see the commands End and Set Pixel Data Address there. Warning: the Change Color and Contrast command has not yet been implemented in myDVDEdit. If your DVD uses it, the image rendered will be incorrect. The title of the window indicates the subpicture stream number. By finding this number in the Informations tab of the PGC window, you can find which language it corresponds to. Under the window, on the right, is a numerical value. This indicates the number of subpicture streams in the selected object. On the left is a stepper (a small control with an arrow pointing up and one pointing down). This allows you to select the stream displayed in the Subpictures tab when there is more than one. Beneath the window is a pop-up menu which allows you to select what is displayed on-screen.
Hide all subpicture streams : The subpictures no longer appear on-screen.
Show all subpicture streams : All the subpicture streams appear on-screen.
Show subpicture stream n : Only the subpicture stream selected in the tab is displayed on-screen.
I know you'll say I'm repeating myself when I remind you that subpictures are also used to make buttons, but let's see how. Button definitions are placed in a special packet called the PCI (Presentation Control Information), which is always in the first sector of each object (remember, the first sector is the first 2048 bytes). Whether it contains buttons or not, this packet is always present. It holds a table which can contain the definitions of up to 36 buttons. Each button definition contains:
The position of the rectangle used as the action zone of the button in the image.
A table number, from among three possibilities, giving the colors and contrasts when the button is selected, as well as the colors and contrasts when the button is activated.
An Auto action flag: if this option is used, the button is activated the moment it is selected, to avoid the user having to press ENTER.
The numbers of the next buttons to select when navigating with the remote control keys.
One command (yes, a single command). This is the command executed when the button is activated.
First of all, a subpicture is used to display the buttons in their normal state (not selected, not activated). When a button is selected, the part of the subpicture defined by the button rectangle is redrawn, not with the colors and contrasts defined in the subpicture commands, but with those from the table of selected-button colors. The same is true when the button is activated (when the user presses ENTER), except that it uses the colors and contrasts from the table of activated-mode colors. Is that any clearer? If not, then take another look at myDVDEdit. Select a VTS Menu. Choose a PGC; it must have at least one subpicture. As soon as myDVDEdit detects an object with a button, the Buttons tab is displayed. Note: if you want to see the button on-screen, you have to position the mouse on the object containing the button subpicture. For now, forget the upper part of the Buttons tab. In the bottom part, you will see, for example, Button 1/5. This means that button number 1 is selected out of a total of 5 possible buttons. The stepper next to this allows you to choose the button you want to select. You can also select a button by clicking on it on-screen. Just below, you will find the definition of the selection zone of the button. To the right you will see the numbers of the buttons selected when the user pushes LEFT, RIGHT, UP, or DOWN on the remote control. Below is the number of the color table used. To the right is the Auto action option. Finally, right at the bottom, is the command that will execute when the button is activated. When a button is not selected, it's drawn using the normal colors of the subpicture. As soon as you select or activate it, it takes the colors from the color table of the corresponding mode. Suppose the color table indicates 1.
Click on the Color tab. Selection 1 will show you the colors and contrasts used to redraw the subpicture when the button is selected. When the button is activated, it will use the colors and contrasts in Action 1. OK, is that clearer now? I told you about a table which can contain the definitions for 36 buttons. You might therefore think that you can have 36 buttons on screen. Well... yes and no. When an image is in 4:3, there is no problem: whether the screen is 16:9 or 4:3, the image and the subpicture display in 4:3. However, when the original image is shot in 16:9, there are three possible outcomes. Image in 16:9 on a 16:9 screen: the image isn't changed. Image in 16:9 on a 4:3 screen displayed in letterbox: the image is reduced. Image in 16:9 on a 4:3 screen displayed in pan&scan: the image is truncated. Whatever the mode used, the images are always 720x480 in NTSC or 720x576 in PAL. Whether the screen is 16:9 or 4:3, the subpicture always occupies the whole screen. You understand by now that the position of a button cannot be the same in all modes, since the film and the subpicture do not undergo the same deformation. In the top part of the Buttons tab, you'll find the Group selector. There can be one, two, or three possible options depending on the film's characteristics. By selecting each of these options, you will see the characteristics of each button in the corresponding mode. Since the button table can only contain 36 entries, if there is only one option you can have 36 buttons, but with two options you can only have 18 buttons, and with three options (rarely used) you can only have 12 buttons on your screen. Normally, a button defined in one mode will be the same in the other modes, but in theory nothing stops you from having a button which differs between display modes. Force Select Button: allows you to automatically force the selection of a button after a specific time.
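The 36-entry table is shared evenly across the display-mode groups, which gives the 36/18/12 counts above. A one-line check of that arithmetic:

```python
# The PCI button table holds 36 entries, shared by all display-mode groups
TABLE_ENTRIES = 36

def max_buttons(groups: int) -> int:
    """Maximum on-screen buttons when the table serves `groups` display modes."""
    assert groups in (1, 2, 3)
    return TABLE_ENTRIES // groups

print([max_buttons(g) for g in (1, 2, 3)])  # → [36, 18, 12]
```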
Force Action Button: allows you to automatically force the activation of a button after a specific time. Delay: the time to wait before a button is automatically selected or activated. Frames: displays the selection rectangle of each button. Numbers: displays each button's number on-screen so as to locate them more easily. Select all: displays all the buttons in their selected state. Note that the buttons are drawn with the color tables of the current button; buttons which don't use the same table will not be displayed correctly in this mode. If you double-click on the command of a button, or in the selection zone of an on-screen button, and that command is one that moves you to another PGC/Cell/Program, then this new PGC/Cell/Program will be displayed. The first important element in this zone is the selector for myDVDEdit's operating mode. When the cursor is positioned to the left, myDVDEdit is in Edit mode. When the cursor is positioned to the right, myDVDEdit is in Debug mode, which we will look at in chapter 9. In Edit mode, the first table in zone 5 displays the size in bytes of different elements of the DVD. Dvd: the total size of the DVD. To fit on a single DVD, it must be less than 4,707,319,808 bytes, but you have to leave a small margin of error, since formatting a DVD uses a good few thousand bytes. Pgc: size of the current PGC. Cell: size of the current Cell. Obj: size of the current object. Sel: size of the selection. The selection is a collection of objects defined by starting and finishing positions in the current PGC. In the current version of myDVDEdit the selection isn't yet usable, but soon it will be very useful for defining zones to cut, copy or export. The Selection table provides the starting and finishing positions of the selection. One button moves the cursor to the memorised position; another memorises the current position as the start of the selection.
A third memorises the current position as the end of the selection. The trash can, in the current version, is only there for information purposes. It lets you know whether there are any objects present in the VOB files which aren't referenced by any PGC; these are therefore useless objects. A future version will allow you to delete them. One of the major benefits of myDVDEdit is its Debug mode, which allows you to execute, step by step, each command in the DVD. To go to Debug mode, move the mode selector to the Debug position, or click on Start Debug in the Debug menu. In Debug mode, the tabs in zone 4 change. Two new tabs appear: Regs and System Regs. The Regs tab displays the content of the General Registers, also called GPRM (General Parameters). There are 16 of these registers, which are completely modifiable by program. They allow you to do arithmetic or logic operations, or to keep data between two playbacks in order to know what needs to be done next. Each register can contain an unsigned integer with a value between 0 and 65535 (between 0x0000 and 0xFFFF in hexadecimal). By default, myDVDEdit uses the letter R to designate these registers. If you prefer to call them GPRM, you can change the corresponding option in Preferences. When a register is modified by a command, it appears in red in the table. The command Set Register Mode allows you to change a register into counter mode. The word Counter then appears on the line, and the value of the register automatically increases by one each second. The registers are reset to zero each time Debug mode is launched. The System Regs tab displays the content of the System Registers, also known as SPRM (System Parameters). There are normally 24, but only the first 21 are currently used. They are 16-bit registers whose values determine the state of the player. While most of these registers are read-only, some can be modified using special commands.
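The behavior of one general register, including counter mode, can be sketched as a toy model. This is an illustration of the rules just described, not myDVDEdit's implementation; the class and method names are my own:

```python
import time

class Gprm:
    """Toy model of one DVD general register (GPRM)."""
    def __init__(self):
        self.value = 0
        self._counter_started = None  # None = normal register mode

    def set(self, v: int):
        self.value = v & 0xFFFF       # values wrap into 0..65535 (16-bit unsigned)

    def get(self) -> int:
        if self._counter_started is not None:
            # Counter mode: the value increases by one each second
            elapsed = int(time.time() - self._counter_started)
            return (self.value + elapsed) & 0xFFFF
        return self.value

    def set_counter_mode(self):
        """Equivalent of the Set Register Mode command."""
        self._counter_started = time.time()

r = Gprm()
r.set(0x10000 + 5)    # out-of-range values wrap: stored as 5
assert r.get() == 5
r.set_counter_mode()  # from now on, get() grows by one per second
```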
By default, to simplify reading the commands, myDVDEdit uses a generic name to designate these registers, but if you prefer to use the term SPRM, or a combination of both, you can modify the corresponding option in Preferences. The registers are reinitialised to their default values each time Debug mode is launched.
0 menuLanguage: 2-character code (ISO 639) for the preferred menu language, defined in the preferences.
1 audioTrack: 0 to 7: current audio track number.
2 subpicture: bit 6: 0 = hidden subpicture, 1 = visible subpicture.
3 angle: 1 to 9: current angle.
4 title: 1 to 99: current title.
5 vtsTitle: 1 to 99: current VTS title.
6 titlePgc: 1 to 32767: current PGC.
7 chapter: 1 to 99: current chapter.
8 highlightedButton: 1 to 36: currently selected button.
9 navTimer: 0 to 65535: automatic seconds counter; once it reaches 0, the player executes the PGC saved in navTimerPgc.
10 navTimerPgc: 1 to 32767: number of the PGC, from the current VTS, executed at the end of navTimer.
11 karaokeMixingMode: mixing mode of the audio during playback of a karaoke disc.
12 parentalCountryCode: 2-character code (ISO 3166) for the country of parental management.
13 parentalLevel: 1 to 8: current parental level.
14 videoMode: bits 8 and 9: 0 = 4/3 or 16/9, 1 = pan&scan, 2 = letterbox; bits 10 and 11: 0 = 4/3, 3 = 16/9.
15 audioCapabilities: each bit defines whether the player is capable of using a specific audio format. The main ones: bit 11 = DTS, bit 12 = MPEG, bit 14 = Dolby Digital.
16 preferredAudio: 2-character code (ISO 639) for the preferred audio language, defined in the preferences.
17 preferredAudioExt: preferred type of audio track: 0 = not specified, 1 = normal, 2 = for the visually impaired, 3 = director's comments, 4 = alternative director's comments.
18 preferredSubpicture: 2-character code (ISO 639) for the preferred subpicture language, defined in the preferences.
19 preferredSubpictureExt: preferred type of subpicture.
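The packed bit fields of registers 14 and 15 can be decoded with simple shifts and masks. A sketch of that decoding, using made-up register values for illustration:

```python
def decode_video_mode(sprm14: int):
    """Split register 14 into display mode (bits 8-9) and aspect ratio (bits 10-11)."""
    disp = (sprm14 >> 8) & 0b11     # 0 = 4/3 or 16/9, 1 = pan&scan, 2 = letterbox
    aspect = (sprm14 >> 10) & 0b11  # 0 = 4/3, 3 = 16/9
    return disp, aspect

def audio_formats(sprm15: int):
    """Report the main capability bits of register 15."""
    bits = {"DTS": 11, "MPEG": 12, "Dolby Digital": 14}
    return {name: bool(sprm15 >> bit & 1) for name, bit in bits.items()}

# A 16/9 player in letterbox that supports only Dolby Digital (made-up values)
disp, aspect = decode_video_mode((3 << 10) | (2 << 8))
caps = audio_formats(1 << 14)
assert (disp, aspect) == (2, 3)
assert caps == {"DTS": False, "MPEG": False, "Dolby Digital": True}
```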
In Debug mode, step-by-step buttons appear in zone 5, equivalent to the user's remote control. The step button executes the current command during the execution of the Pre/Post/Cell commands; when a Cell is being read, it passes to the next Cell or, if it's the last, displays the next command to be executed. Another button, only available if there is a button on screen, executes the command of the selected button. Another executes all the following commands until the next Cell, or until the next breakpoint. Preceding Chapter button of the remote control: does nothing if there is no preceding chapter. Next Chapter button of the remote control: does nothing if there is no next chapter. Menu button of the remote control: stops playback of the current title and launches the VTS Menu (root) of the current VTS. Select Subtitle button: the selection is made from among the tracks selected in the Information tab, and according to the order defined in the IFO parameters. Select Audio track button: the selection is made from among the tracks selected in the Information tab, and according to the order defined in the IFO parameters. Enter button: activates the current menu button; the command associated with the button is executed. Navigation buttons: allow you to change the selected button. The choice of the next button selected is defined at the level of each button. A button is unavailable either when it has no meaning in the current context, or when the option prohibiting its use has been checked in the Prohibited User Options tab.
A context manager for providing a framework for enabling continuous customer access resource innovation by maximizing open business processes. The context manager allows multiple combinations of users to access various business processes through multiple types of customer access resources. The context manager includes a context manager management interface for creating a context manager object for a session, the context manager object providing a bridge from customer access resources to business processes and maintaining a context of the session across customer access resources. application Ser. No. 09/06/999, entitled "Quality Center and Method For A Virtual Sales and Service Center," filed on same date herewith by Charles McDonough et al., and assigned to the assignee of this application. All of the above-identified applications are incorporated by reference herein. This invention relates in general to a virtual Sales and Service center, and more particularly to a method and apparatus for connecting a customer to any type of sales and service resource through any access method at any time from any customer location. In the United States, telecommunications is an industry that is undergoing convergence. There is a good deal of discussion about the consolidation of computing and telecommunications into one overarching entity. There is also a lot of talk about one wire to the home and one even larger wire or cable to the business. The trend toward universal data access has brought the focus of two technologies to the solution of a single problem, i.e., integrating telephones and computers to provide access and control of the data residing on both platforms. Computer telephone integration (CTI) is a technology platform that merges voice and data services at the functional level to add tangible benefits to business applications.
CTI technology combines voice and data to form a foundation to support business applications, seamlessly combining functions from both the telephony world and the computing world. Over the years, telecommunications and data technologies have grown more alike. The independent features offered by telephones and computers become even more powerful, useful, and convenient when combined into voice processing applications running on computers. In today's business environment, the telephone is often the primary means of communication in many different situations: placing catalog orders, checking airline schedules, querying prices, reviewing account balances, and recording and retrieving messages. Usually, each telephone call involves a service representative talking to a caller, asking questions, entering responses into a computer, and reading information to the caller from a terminal screen. When organizations automate this process by linking their computer and telephone systems, they can lower costs, provide better customer service, increase the number of services available, and extend hours of operation. CTI lets customers, for example, use their touch-tone phone to check their bank balance 24 hours a day rather than walk to a cash machine or wait on hold for a customer service representative. And the marriage of phone and computer systems can identify incoming calls, route them to the appropriate person, and deliver the caller's file on a computer screen to the person answering the call--before the call is answered. Accordingly, the road to greater profit runs through a call center for high quality, low-cost customer acquisition and retention. CTI provides many benefits to consumers.
For example, CTI allows consumers to spend less time on hold, improves response time for callers once they get through to the company, allows instant access to database information, often on a 24-hour basis; provides callback options for callers who don't want to stay on hold, provides access to service reps who, when freed from routine functions, have more time to research and answer complicated questions, and eliminates the need to repeat identification information and reason for calling when transferred to another employee or department. Businesses benefit as well: increased telesales revenue, higher levels of referral and repeat business, fewer data entry keystroke errors, shorter transaction times, increased employee productivity, improved employee morale, and cost savings from operational efficiency. Today, the majority of CTI applications are being built for call centers. A call center is a customer business center where initial access is by telephone. Employees working in call centers provide services over the telephone. Their tasks can include placing outgoing calls, answering incoming calls, asking callers for information, or providing services. While handling calls, employees often use desktop computers to enter or retrieve information. Current call center routing techniques can be difficult to manage and do not simplify the interaction for customers. Routing services within a call center have traditionally been provided through caller initiated functions such as selecting one of several 800 numbers or making a particular selection in the Voice Response Unit (VRU). The routing services do not provide for an effective match of skilled employees with customer value and need. Multi-site call center routing is typically a simple percentage allocation of calls to various sites achieved through the network carrier. Overflow services are managed through the reassignment of employees to queues.
The goal in all these methods is to provide some level of improved service to the customer through a better match of calls to skilled employees and a better use of available employees. These approaches require many different mechanisms to provide call routing. These mechanisms include: various 800 numbers, network carrier load balancing, VRU routing to queues and static realignment of employees to queues. The typical CTI call center makes use of products and services from several different sources: public and private networks; voice switches, automatic call distributors, hardware and software from computer vendors, specialized business applications from software suppliers, and components such as voice response units, voice mail systems, call sequencers, predictive dialers, and fax machines. However, prior call center systems do not provide a framework to enable continuous channel innovation by maximizing open business processes. Further, prior call center systems lacked the capability to manage multiple customer access resources spanning a large-scale distributed business solution. As a result, business processes have not been effectively used across multiple channels. Multiple customer access resources have not been bridged to underlying business processes. It can be seen that there is a need for a virtual customer sales and service center which connects any customer to any resource through any access method at any time from any customer location. It can also be seen that there is a need for a common technology platform which supports all forms of customer interaction including customer self sales and service as well as employee assisted sales and service. It can also be seen that there is a need for a framework to enable continuous channel innovation by maximizing open business processes. It can also be seen that there is a need for managing multiple combinations of users needing access to business processes through multiple types of delivery channels.
To overcome the limitations in the prior art described above, and to overcome other limitations that will become apparent upon reading and understanding the present specification, the present invention discloses a context manager for providing a framework for enabling continuous channel innovation by maximizing open business processes. The present invention solves the above-described problems by providing a manager for allowing multiple combinations of users to access various business processes through multiple types of customer access resources. A system in accordance with the principles of the present invention includes a context manager management interface for creating a context manager object for a session, the context manager object providing a bridge from customer access resources to business processes and maintaining a context of the session across customer access resources. Other embodiments of a system in accordance with the principles of the invention may include alternative or optional additional aspects. One such aspect of the present invention is that the context manager management interface further comprises an information manager for managing information required during the session. Another aspect of the present invention is that an access coordinator is provided for coordinating access by a customer access resource to a business process, and wherein the context manager management interface provides a context for completing the business process for the session. Another aspect of the present invention is that a component integration architecture is provided, wherein a component comprises a reusable software module presenting an interface that conforms to an object model of a context manager object or a business object, the component being enabled and accessed at runtime through the component integration architecture. 
Another aspect of the present invention is that the context manager management interface keeps track of the context manager object, customer objects, account objects, contextual information and session information. Another aspect of the present invention is that the customer access resources comprise customer service representative applications, Internet/PC applications, and kiosk applications. Still another aspect of the present invention is that an interface is provided for interfacing the business processes to the customer access resources. Another aspect of the present invention is that the business processes represent high level abstractions of multiple components/objects collaborating to deliver the requested response. Another aspect of the present invention is that a component interface is provided for presenting business processes to customer access resources, the customer access resources selecting business processes therefrom. Another aspect of the present invention is that the context manager management interface destroys context manager objects when a session ends. Another aspect of the present invention is that each context manager object maintains sets of relationships to all of the business objects that represent the context for a given session. Another aspect of the present invention is that the sets of relationships that the context manager maintains during a given session are dependent on the business processes the particular session is using. Yet another aspect of the present invention is that the context of a session is shared among customer access resources by sharing context manager objects. Another aspect of the present invention is that customer access resources can access the same context manager object to perform simultaneous, concurrent functions. Another aspect of the present invention is that a transaction manager is provided for preserving object integrity of the session by controlling access to the business activities.
Another aspect of the present invention is that the transaction manager logically groups business activities into a business unit of work, a business unit of work involving a state change to one or more of the business objects that comprise the business activities. Another aspect of the present invention is that the context manager management interface ensures uniform handling of the business units of work that are defined by the business functions across the customer access resources. Another aspect of the invention is that multiple context manager management interfaces can be run simultaneously to provide performance, scalability and fault tolerance, and any customer access resource can access any context manager management interface. Another aspect of the invention is that context manager management interfaces can be grouped into modules and any number of modules can be deployed, providing massive scalability. Any customer access resource involved in a customer contact session can access the context manager management interface controlling that session regardless of what module the object manager resides within. These and various other advantages and features of novelty which characterize the invention are pointed out with particularity in the claims annexed hereto and form a part hereof. However, for a better understanding of the invention, its advantages, and the objects obtained by its use, reference should be made to the drawings which form a further part hereof, and to accompanying descriptive matter, in which there are illustrated and described specific examples of an apparatus in accordance with the invention. FIG. 11 illustrates the use of modules to achieve massive scalability. In the following description of the exemplary embodiment, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration the specific embodiment in which the invention may be practiced.
It is to be understood that other embodiments may be utilized as structural changes may be made without departing from the scope of the present invention. The present invention is a Virtual Sales and Service Center that provides connection of customers to any type of sales and service resource through any access method at any time from any customer location. FIG. 1 illustrates a three dimensional representation of the Virtual Sales and Service Center access logistics 100. In FIG. 1, the y axis 102 represents the contact initiator. A customer contact may either be initiated by the customer 140 or by the company 142. The x axis 106 represents the access method of the customer. Customers may access the Virtual Sales and Service Center from a wide variety of locations using a variety of methods. For example, a customer may access a company through the Internet 120. A customer may access a web page to retrieve customer or company information. The information at the web page may be unsecured information concerning a company's services and/or products. Alternatively, a customer may access secured, personal or private information via encryption, authentication and other digital security measures. Those skilled in the art will recognize that the invention is not limited to a particular instrumentality, however. Other customer access methods may include direct PC access 121, e-mail 122, kiosk 123, phone 124, fax 125, mail 126, TV 127, etc. Those skilled in the art will recognize that the type of customer access method is not meant to be limited to the particular examples outlined herein. The invention provides the interface with any type of customer hardware and access method. The z axis 104 represents the resources accessed during the contact with the company. The types of resources accessed may include an employee 108. Employees 108 may be in thousands of locations ranging from large call centers with hundreds of persons to small offices or branches with a single person.
The skills of employees may vary tremendously including product knowledge, language, sales ability, knowledge of specific customers, etc. As a result, the logistics associated with effectively matching customer contacts are particularly challenging and the benefits are particularly high. Other resources accessed by customers include the VRU 110, web server 112, fax server 114, video server 116, e-mail server 118, etc. Those skilled in the art will recognize that the type of resource is not meant to be limited to the particular examples outlined herein. The invention provides the interface with any type of resource existing within the company. FIG. 2 illustrates the types of customer interaction 200 included in the scope of this invention. The y axis 202 represents the initiator. A customer contact may either be initiated by the customer 210 or by the company 208. The z axis 204 represents the customer purpose. The overall purpose may be sales 216 or service 218. The x axis 206 represents the interaction style. The interaction style may be self-assisted 212 or assisted 212. FIG. 3 illustrates a functional diagram of the Virtual Sales and Service Center 300 according to the present invention. Depending on the customer's access method, a number of different resources may accept the initial contact with the customer. In many companies, the phone is a high volume access method. In the invention a cloud 310 is established to source calls to the Virtual Sales and Service Center 300. All calls, including local numbers 312 and 1-800 numbers 314, are delivered to this cloud 310. Within the cloud 310 are Voice Response Units (VRUs) 320 which play a script that is heard by incoming customers placing calls to the Virtual Sales and Service Center 300. The script played by the VRUs 320 enables a customer profile to be identified. The content of the script is then personalized for each customer, including matching the language being spoken by the caller. 
The VRUs 320 offer a convenient navigation interface and can both meet customer requests directly or initiate navigation to a resource that can handle the customer request. The VRUs 320 can also execute some cross-sell activities. Should it be determined that a call needs to be directed to a company employee or other resource, the VRUs interact with the routing engine 360 through the CTI interface 370 to initiate the transfer. The routing engine 360 accesses ANI and DNIS information, customer profile information 364, VRU activity thus far, the routing rules 366, resource profiles 363, and the resource status server 380 to select an available resource based on the customer's expressed or implied need. ANI is a service offered by telephone networks that provides the billing directory number associated with a calling party. When a customer calls an 800 number to order from a catalog, the call arrives at the call center with the caller's telephone number. The telephone number is passed to a CTI server 370. Organizations that maintain multiple 800 numbers can also use Dialed Number Identification Services (DNIS) offered by carriers to identify what the caller wishes to discuss. A bank, for example, can assign 800-555-1333 to VISA cards and 800-555-1334 to VISA Gold cards. The Virtual Sales and Service Center 300 according to the present invention may combine the use of ANI and DNIS with the other information available to it. Furthermore, CTI systems 370 using ANI make it possible for companies to capture information about abandoned calls. If a customer hangs up while waiting for any type of sales and service resource, employees can pro-actively call back customers and offer to be of assistance. Routing rules 366 are not based on a single queue or gate (e.g. Service) but can be governed by which resource skills can most accurately address the request. 
Once any type of sales and service resource has obtained a new skill or improved on an existing skill, it becomes a simple task of updating that skills profile 364. Similarly, if additional customer information needs to be included in the routing rules 366, the customer profile 364 is updated to include the routing criteria. Overflow rules within the routing rules 366 are also automated to allow for increased call center management. Upon obtaining all relevant available requirements information the routing application 360 will access the resource profiles 363 to find resources with the appropriate skills. This resulting set of resources will be used when accessing a statistics server 380 to determine which resources are currently available for the contact. The statistics server 380 provides a real time status of each of the resources' availability. A specific resource will then be selected based on resource availability, skill profiles, and load balancing. If the statistics server 380 indicates that the optimal resource is not available, the routing engine 360 will check its routing rules 366 for overflow processing. If the overflow resources are available, the call and associated data will be routed to those resources. If the original destination resource and the first overflow resources are not available, the call will be routed to another resource based upon the routing rules. The routing engine 360 then notifies the VRU 320 with the appropriate call treatment and routing authorization once a resource is selected. The VRUs 320 then send the call to the switch 330 which interacts with the CTI interface 370 to determine the appropriate employee 344 and phone 340. 
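The selection logic described above (skills match, then availability from the statistics server, then overflow rules) can be sketched as follows. This is a simplified illustration under my own names and made-up data, not the patent's actual routing engine:

```python
# Toy skills-based router: find resources whose skills cover the contact's
# requirements, pick an available one, then fall back to overflow candidates.
def route(contact_skills, resource_profiles, availability, overflow_order):
    """Return the id of the resource that should take the contact, or None."""
    skilled = [r for r, skills in resource_profiles.items()
               if contact_skills <= skills]          # skills profile match
    for r in skilled:
        if availability.get(r):                      # statistics server says free
            return r
    for r in overflow_order:                         # overflow processing rules
        if availability.get(r):
            return r
    return None

profiles = {"emp1": {"visa", "spanish"}, "emp2": {"visa"}, "vru": set()}
avail = {"emp1": False, "emp2": True, "vru": True}
assert route({"visa"}, profiles, avail, overflow_order=["vru"]) == "emp2"
```

A real engine would also weigh load balancing and customer value, but the cascade from skilled-and-available to overflow is the core of the flow the text describes.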
The CTI interface 370 also interacts with the workstation 343 associated with the phone 340 and ensures a screen pop, which provides the employee with the key information such as the customer identity, their need and the presence of a cross-sell opportunity, is delivered at the same time as the actual customer phone call is delivered to the phone 340 by the switch 330. Contacts may also arrive at a web server 354, a home PC direct connection server 356, a kiosk 353, an e-mail server 358 or a fax server 350. In all cases, every customer contact is immediately logged with the context manager 362. The context manager 362 manages the complexity of dealing with multiple customer interaction devices which must share common business processes. These business processes are distributed across many underlying, heterogeneous systems. The context manager 362 provides for the management of information which is required over the life of a business event. The context manager 362 coordinates access to the appropriate Service Providers 368 and provides the Service Provider 368 the context to complete the business transaction. As a contact progresses, the VRU 320, the employee workstation 343, the web server 354, the kiosk 353, the fax server 350, the e-mail server 358 and the PC direct server 356 continually interact with the context manager 362. Contacts may be transferred between resources many times during the course of a call, and this transfer activity is coordinated by the context manager 362 and the routing engine 360. If the contact is asynchronous, or if there is work which was initiated but not completed during the course of a synchronous contact, resources may request that the context manager 362 place a request with an asynchronous queue server 392. The routing engine 360 will coordinate the subsequent matching of that request with an available resource, which will most often be an employee 344, but may be other resource types.
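The context manager's role of carrying one session's state across channels can be sketched as a toy model. Class and method names here are my own, chosen for illustration; they are not the patent's API:

```python
class ContextManager:
    """Toy session context shared by every customer access resource."""
    def __init__(self, session_id: str, customer: str):
        self.session_id = session_id
        self.customer = customer
        self.events = []  # history of the contact so far

    def log(self, resource: str, event: str):
        # Every resource (VRU, workstation, web server, ...) records here,
        # so a transfer hands over the full context instead of restarting.
        self.events.append((resource, event))

session = ContextManager("S-1", customer="Jane Doe")
session.log("VRU", "identified caller, requested account balance")
session.log("employee workstation", "answered question, opened cross-sell")
assert len(session.events) == 2
```

Because all resources reference the same object, a call transferred from the VRU to an employee arrives with the VRU's activity already in `events`, which is exactly the continuity the text describes.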
The Virtual Sales and Service Center 300 uses a suite of products to enable intelligent contact routing in a network cloud, including customer profiles 364, employee skills profiles 363, VRU options 320, availability of employees 340, and overflow management within the routing rules 366. In the preferred embodiment, Genesys computer telephony processing components 360, 370, 380 provide optimized and flexible solutions to transform the operations from simple interactions between phone calls and voice switch queues into sophisticated, high value information exchanges that accomplish real-time matching of customer contacts through any access method with the appropriate resources. All components in FIG. 3 communicate via LAN-based TCP/IP messaging. This open, distributed architecture provides a scalable and adaptable solution. FIG. 4 is an outline 400 of the business processes for operating and supporting a multi-site, virtual call center. The business processes 400 illustrated in FIG. 4 include routing 402, customer service support 404, supervisor/management support 406 and system support 408. The routing processes 402 are supervised and modified as necessary by the Quality Center. For example, the Quality Center may adjust the routing rules in cases of inclement weather forcing the shutdown of a particular location by routing calls to multiple sites 410. The Quality Center will provide customer service support 404 by defining escalation processes 420 and providing proactive support and feedback on sales/service performance issues 422. The Quality Center will also measure the performance of the Virtual Sales and Service Center 424, provide performance feedback 426 and manage the staffing, scheduling and forecasting 428 for the Virtual Sales and Service Center 300. System support 408 will be provided by the Quality Center in the form of monitoring of 430 and reporting on 432 the system performance, and providing disaster recovery and contingency procedures 434.
According to the present invention, the routing rules are the step-by-step instructions which combine the routing components to identify which resource will receive a particular contact. Skills based routing methodology uses the skills and experience of each resource. The skills and experience of each resource are then matched against the requirements and characteristics of a particular contact to assign any type of sales and service resource to the contact. Contacts can be assigned to any resource that has the skills required for the contact. Similarly, overflow can be to any resource with the required skills. Intelligent rule based routing according to the present invention provides several advantages to a Virtual Sales and Service Center. First, rule based routing reduces customer confusion by providing one, or at most a few, access numbers for all of a company's products. Service is improved by getting the customer to the right resource the first time, thereby reducing transfers. Rule based routing provides distinctive service levels based on a customer's relationship. Further, rule based routing can capitalize on "moment in time" relationship expansion opportunities by routing identified calls to skilled cross-sell and sales employees, and can route callers to appropriate specialized employees based on the caller's request. The efficiency of all sales and service resources is improved by balancing contacts across the enterprise resource pool, and management of these resources may be more automated through routing rules designed to automatically handle overflow situations. This means that fewer resources, particularly expensive human resources, are required to handle peak contact volumes while maintaining the desired customer experience. In addition, rule based routing allows positioning for mass customization of contacts based on customer indicated preferences. To remain competitive, businesses must retain their most profitable customers and pro-actively increase customer profitability.
According to the present invention, this strategy is implemented by segmenting customers and allocating resource levels to each segment so as to deliver the desired customer experience. For example, calls from the most profitable customers would be answered by a business's most skilled and experienced employees, while calls from the least profitable customers can be answered by the least experienced and skilled employees. Many contacts into the Customer Service Center also offer unique cross-sale opportunities. For example, if a customer is calling to pay off a loan, then the customer may be interested in a new loan. If the customer has only a checking account, then the customer may be a candidate for other services. When these unique cross-sale opportunities are identified, customers should be routed to specially trained cross-sale specialists, and/or customer interaction technology resources may be directed to issue cross-sell messages to the customer. To facilitate this objective, Voice Response Units (VRUs) are scripted to identify the type of service the customer desires prior to transferring the call to any type of sales and service resource. This information will then be used to route the customer to a sales and service resource with the appropriate skills for that service. Specific requests for employee extensions can also be provided via a script. In prior systems, calls are often balanced between centers based on "expected" call arrival and staffing assumptions. However, the percentage of calls allocated to each call center must be manually adjusted when actual arrival rates do not match the "expected" arrival rates. According to the present invention, call routing automatically balances calls between all locations because all employees in all locations can be considered during any route request. Further, the present invention automatically routes calls to overflow employees when optimal employees are unavailable.
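One way to picture the segmentation and cross-sell logic above is the following sketch; the tier numbering (1 = most profitable customer / most experienced employee) and all names are hypothetical, not taken from the patent:

```python
def route_by_segment(customer, employees):
    """Match the customer's profitability tier to an employee of the same
    experience tier; otherwise take the closest free tier (illustrative)."""
    same_tier = [e for e in employees if e["tier"] == customer["tier"] and e["free"]]
    if same_tier:
        return same_tier[0]["name"]
    free = [e for e in employees if e["free"]]
    if not free:
        return None
    return min(free, key=lambda e: abs(e["tier"] - customer["tier"]))["name"]

def flag_cross_sell(vru_request, customer):
    """A loan payoff request is itself a cross-sell opportunity, even when
    the stored customer profile carries no cross-sell indicator."""
    return vru_request == "loan_payoff" or customer.get("cross_sell", False)
```

A tier-1 caller lands with the veteran; the VRU-detected loan payoff raises the cross-sell flag regardless of the stored profile.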
Businesses need to develop a "relationship" with each customer. However, relationships are best developed when a customer speaks with any type of sales and service resource who is familiar with the customer and his or her needs. Call routing according to the present invention contributes to building customer relationships by routing calls to employees who have previously dealt with the customer. Rule based routing provides the customer the ability to request a specific employee or, in the absence of a specific employee request, to route the customer to any type of sales and service resource with whom he or she has previously spoken. If that particular employee is not available, the customer should be able to request a call back from the employee. The rule based routing system also provides a framework where additional routing functionality can be easily developed for the Virtual Sales and Service Center. Intelligent routing technology assures that calls are routed to employees with the necessary skills to provide the highest quality of service to the calling customer. This technology utilizes information gathered from the customer profile and seeks to make an appropriate match to the profile of a sales and service resource. Routing decisions are therefore not based on a single queue or gate (e.g. Service) but can be governed by which employee skills can most accurately address the caller's request. Once any type of sales and service resource has obtained a new skill or improved on an existing skill, it becomes a simple task of updating that employee's skills profile. Similarly, if additional customer information needs to be included in the routing decision, the customer profile is updated to include the routing criteria. Overflow rules are also automated to allow for increased call center management.
Accordingly, an intelligent rule based routing system according to the present invention can provide single 800 number access for all products and services; pre-routing between multiple call centers based on availability of particular employee skill sets; skills based routing via employee and customer profile matching; call overflow management based on automated rules and pre-programmed next best routes; improved call management by reduced points of control; service level distinction based on customer value profile once customer is identified; and mass customization of routing based on detailed employee and customer profiles. FIG. 5 illustrates an overview 500 of the Context Manager 502. The Context Manager 502 provides management capability for multiple customer access resources 504 which share common business processes that may be distributed across many underlying, heterogeneous systems 506. The Context Manager 502 provides the management of information required over the life of the business event. The Context Manager 502 coordinates access to the appropriate business processes and provides them the context to complete each business unit of work. The Context Manager 502 provides the interface between the business process Service Provider 510 and the different channels 520-534. Channels are often defined very broadly. As a result, the different specific channel varieties 520-534 must be identified. These channel variants 520-534 are called customer access resources. The term customer access resource is used because the channels not only vary due to their specific purpose, but also vary in their behavior as they personalize the customer experience. While intelligent routing provides rich functionality, the data it uses to make decisions on call attributes must be processed very fast. Customer profiles, customer accounts, and traditional account data will be accessed by a VRU 540 and customer initiated VRU events will be passed to a Service Provider 510. 
The Service Provider 510 maintains the business logic in channel independent applets. The Service Provider 510 applies decision logic to determine the customer's needs. The result will be a call routing profile that the intelligent routing engine will use to match against the centrally maintained employee profile. The result will be an intelligent routing rule based on the custom call profile and skilled employee availability. The cloud 540 will pass pertinent routing information collected by the Service Provider 510 during the VRU to Service Provider 510 interaction. The Service Provider 510 will then perform a logical combine of the VRU attributes and the customer profile attributes to determine the true routing attributes of the call. For example, in the context of banking, if a customer performed a loan payoff request in the VRU but did not have a cross-sell indicated on their customer profile, then the routing profile would nevertheless indicate that a cross-sell was "Yes", since the payoff request itself signals a cross-sell opportunity. This routing information may include tier, product(s), type of service, type of customer banking, language captured from DNIS, and a number-of-transfers indicator. The present invention emphasizes the use of Component and Object Technology. The Object Management Group's (OMG) Common Object Request Broker Architecture (CORBA) may be used for distributed computing and object messaging. In this manner, product availability, openness and functionality goals can be satisfied. Multiple customer access resources spanning a large-scale distributed business solution are managed using the Context Manager framework. However, to rewrite business processes or create new "adapters" each time a new access resource becomes popular would not be economically feasible. Accordingly, the Context Manager anticipates a proliferation of direct customer access resources, some of which are yet to be realized. Furthermore, the Context Manager addresses the issue of maximizing the reuse of business processes across those resources.
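The "logical combine" of VRU attributes and customer profile attributes might be sketched as below, with the loan-payoff example wired in; the function name and attribute keys are illustrative assumptions:

```python
def build_routing_profile(vru_attrs, profile_attrs):
    """Combine VRU-captured attributes with stored customer profile
    attributes into one routing profile; VRU evidence takes precedence."""
    routing = dict(profile_attrs)
    routing.update(vru_attrs)
    # Per the banking example: a loan payoff request in the VRU implies a
    # cross-sell opportunity regardless of the stored profile indicator.
    if vru_attrs.get("service") == "loan_payoff":
        routing["cross_sell"] = "Yes"
    return routing
```

The resulting dictionary plays the role of the routing profile handed to the intelligent routing engine: tier from the stored profile, language and service type from the VRU, and the cross-sell flag upgraded by the VRU event.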
Thus, the Context Manager reduces overall implementation cost, improves the time to market and ensures a consistent customer experience across customer access resources. The Context Manager accomplishes these business requirements by providing a framework that enables continuous channel innovation by maximizing the "openness" of the underlying business processes to create a "plug and play" type of interface. In this way, multiple combinations of users and delivery channels (called Customer access resources) may be created quickly using the best, most current technology and advanced usability characteristics for the specific need. These heterogeneous customer access resources may then be plugged into the Context Manager to utilize its ability to leverage the existing, underlying business processes. The Context Manager framework handles the complexity of managing multiple Customer access resources which must share common business processes that may be distributed across many underlying, heterogeneous systems. FIG. 6 illustrates a Context Manager in a Retail Direct Banking virtual sales and service center 600. In FIG. 6, the Context Manager 602 is disposed between the Service Provider 610 and the Customer access resources 620. The business processes represented by a business unit of work 612, 614, for example, are dispersed among several components in the overall solution. These components make up the Service Provider layer of the solution and include business components and infrastructure components. The Context Manager 602 creates a Context Manager object 604, 606 for each session a user experiences with the overall business solution. These objects maintain the context of a session across customer access resources 620 and provide the bridge to the underlying legacy systems 630. The Context Manager 602 enables the business requirement to maximize reuse across the channels 620 due to the expected proliferation of direct customer access resources 620.
The Context Manager 602 does this while reducing overall implementation cost, improving time to market, and ensuring consistent customer experiences across channels. The Context Manager 602 provides the management of information required over the life of the business events 604, 606. The Context Manager 602 coordinates access to the appropriate business processes 612, 614 and provides them the context to complete each business unit of work. The present invention emphasizes Component and Object Technology. A component is a reusable software module presenting an interface that conforms to an object model and which can be enabled and accessed at runtime through a component integration architecture. The Object Management Group's (OMG) Common Object Request Broker Architecture (CORBA) was selected as the basis for distributed computing and object messaging because of product availability, openness and functionality. Component and Object Technology provides several advantages. For example, components are the best vehicle for reuse. Where appropriate, components built with Object Technology are more reusable, more flexible and are built more productively. Trends also indicate there will be interoperability between the two major approaches to distributed object messaging, i.e., CORBA and Distributed Component Object Models (DCOM). Another advantage to Component and Object Technology is that there is a convergence between object messaging approaches and the Internet. Object technology is emerging as the best approach for advanced user interfaces and complex business applications which require flexibility and the need to evolve and change over time, while component based solutions allow for integration of "best of breed" components. Further, distributed object based messaging allows for the creation of "customer access resource" independent objects which can respond to messages from a variety of sources (e.g. Internet, call center, Voice Response Unit (VRU), etc.). 
Nevertheless, implementing a solution that is powerful enough to leverage a solution's enterprise business processes 612, 614 across multiple customer access resources 620 poses a number of challenges. As mentioned, the business processes 612, 614 represented by a business unit of work are typically dispersed among several components in the overall solution. These components make up the "Service Provider" layer 610 of the solution and include the business components like Account and Customer as well as infrastructure components 612, 614. Without the Context Manager framework, the Customer access resources 620 would have to manage and keep track of all the business processes 612, 614 as well as the objects that support those business processes in order to complete any business unit of work. As stated above, the Context Manager 602 creates a Context Manager object 604, 606 for each session a user experiences with the overall business solution. These objects 604, 606 maintain the context of a session across channels 620 and provide the bridge to the underlying legacy systems 630. The business processes represented by a business unit of work includes components 612, 614 which constitute the Service Provider layer 610 of the solution. For example, the Service Provider layer 610 includes business components, like customer 612 and account 614 objects, which interface to the Legacy Systems 630. Accordingly, for each call in the Voice Response Unit (VRU), the Context Manager 602 keeps track of context objects 604, 606, a customer object 612, account objects 614, and any other contextual, or session, information, (i.e. context across customer access resources 620 and transaction management). The Context Manager 602 also manages the session objects for each of the Customer access resources (e.g. Customer Service Representative Desktop applications, Internet/PC Banking applications, and Kiosk applications in retail outlets). 
Thus, the Context Manager 602 framework removes the management responsibility from the customer access resources 620 and places it on an architectural framework that can be leveraged across Customer access resources 620. Every request from any Customer access resource 620 comes through the Context Manager 602 before it is fulfilled by the Service Provider layer 610. In this way, it acts as an intermediary between the Service Provider business processes and the Customer access resource. As explained above, the Context Manager 602 interfaces the business processes 612, 614 that are available to end users to the Customer access resources 620. Examples of the kinds of banking business processes that can be made available through the Context Manager framework 602 include: Establish Session, Verify Customer, Get Accounts, Get Balance, Get Transaction History, and End Session. These processes represent high level abstractions of multiple components/objects collaborating to deliver the requested response. Still, those skilled in the art will recognize that the applicability of the Context Manager is not meant to be limited to banking processes. The Context Manager may interface any type of business process to multiple customer access resources 620. In addressing the issue of maximizing the reuse of business processes across those customer access resources 620 by reducing overall implementation cost, improving the time to market and ensuring a consistent customer experience across channels, the Context Manager plays several roles in delivering functionality to end users. FIG. 7 illustrates an intermediary model 700 in which the Context Manager 702 functions as an intermediary between the complexity of the Customer access resources 720 and the underlying Business Process subsystem 710.
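The intermediary role of FIG. 7 is essentially a facade: a single component interface in front of the underlying business components. A minimal sketch, with invented component and method names standing in for the patent's Service Provider objects:

```python
class CustomerComponent:
    """Business component in the Service Provider layer (illustrative)."""
    def verify(self, name):
        return name == "J. Jones"

class AccountComponent:
    """Another Service Provider business component (illustrative)."""
    def get_balance(self, acct):
        return {"#1155": 500}.get(acct)

class ContextManagerFacade:
    """Single interface wrapping the underlying business components;
    customer access resources call this facade instead of reaching each
    component directly, and it relays the request onward."""
    def __init__(self):
        self._customer = CustomerComponent()
        self._account = AccountComponent()

    def verify_customer(self, name):
        return self._customer.verify(name)

    def get_balance(self, acct):
        return self._account.get_balance(acct)
```

A VRU, web server, or kiosk would each talk only to the facade, which is what lets business processes be reused unchanged across channels.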
The Context Manager 702 essentially wraps multiple objects, and combines the public operations or functions of the underlying objects to provide a single interface. In this way, the Context Manager 702 acts as a bridge to the underlying system 710. Customer access resources 720 can request high-level Business Processes 710 of the Context Manager 702 through its component interface, and the Context Manager 702 relays the request on to the business components 710 that comprise the overall solution. FIGS. 8a-c illustrate the Context Manager performing a second role 800 by keeping track of "session" information, or the context of the user experience, across Customer access resources. The Context Manager creates and destroys Context Manager objects as sessions start and end to manage the business processes. Each Context Manager object maintains a relationship to all of the business objects that represent the context for the given session. The sets of relationships that the Context Manager maintains during a given session are dependent on the business processes the particular session is using. Staying with the banking example, in FIG. 8a a customer 802 calls a bank and establishes a Voice Response Unit (VRU) session 804. This session communicates directly with a Context Manager object 806. During the course of this session, the Context Manager object 806 establishes relationships with several Business Process Service Provider objects: a Contact object 810 representing the user's session, a Customer object 812 representing J. Jones, an Account object 814 representing account #1155, and another Account object 816 representing account #2233. The relationships maintained by this Context Manager object represent the context of this particular session. FIG.
8b illustrates a user 802 electing to switch from one customer access resource 820 to another 822 during a "session" (for example, a user transfers from the VRU to a Call Center Customer Service Representative). In this case, the context can be transferred by simply forwarding the reference to the appropriate Context Manager object 806 to the second channel, or Customer access resource 822. This illustrates what happens when the customer 802 transfers from the VRU to a Customer Service Representative (CSR) in the Call Center 826. This transfer occurs by forwarding the reference to the Context Manager object 806 on to the CSR workstation. Since the Context Manager object 806 maintains the context of the session, the context of the call (the Service Provider contact, customer, and account objects) is preserved during the transfer process. FIG. 8c illustrates the establishment of a new session through one of the Customer access resources 830 by creating a new Context Manager object 832 each time a user requests information. The new Context Manager object 832 represents the session context for that specific interaction. In this case, a second customer contacts 840 the bank through the Internet 836. A second Context Manager session is established for the Web customer access resource 830 so that each session communicates with its own Context Manager object 832. Thus, the context of one session is isolated from that of other sessions. Additionally, the context of a session can be shared among several Customer access resources simply by sharing the appropriate Context Manager object 806, 832. Each Customer access resource 802, 826, 830 can access the same Context Manager object to perform simultaneous, concurrent functions. Since the Customer access resources 802, 826, 830 do not maintain any context about the session, there are no issues with respect to data integrity or the misrepresentation of a business object's state.
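The per-session context object and the reference-forwarding transfer of FIGS. 8a-c can be sketched together; all class and attribute names below are illustrative stand-ins for the patent's objects:

```python
class ContextManagerObject:
    """One object per session; it holds the references to the Service
    Provider business objects (contact, customer, accounts) that make up
    the session context."""
    def __init__(self, session_id):
        self.session_id = session_id
        self.contact = None
        self.customer = None
        self.accounts = []

class ChannelRegistry:
    """Maps each customer access resource (channel) to the context object
    it holds; a transfer simply forwards the object reference."""
    def __init__(self):
        self.by_channel = {}

    def start_session(self, channel, session_id):
        ctx = ContextManagerObject(session_id)
        self.by_channel[channel] = ctx
        return ctx

    def transfer(self, src, dst):
        # The context object itself is untouched, so the contact, customer
        # and account references survive the hand-off to the new channel.
        self.by_channel[dst] = self.by_channel[src]
```

A VRU-to-CSR transfer moves only the reference; a second customer arriving via the web gets a fresh, isolated context object.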
When dealing with heterogeneous objects distributed across multiple processing servers, transaction integrity is of critical importance. Transaction management is a fundamental piece of context management that guarantees and preserves the data integrity of the objects that comprise the overall solution. The ability to allow a diverse set of objects to participate in any given transaction becomes critically important when dealing with data that spans multiple objects and is stored in multiple places. FIG. 9 illustrates the fulfillment 900 of transaction management by the Context Manager. Context Manager 902 fulfills this role by assuming the responsibility for the initiation and coordination of multiple business activities. The Context Manager 902 logically groups these atomic business activities into a business unit of work. A business unit of work typically involves a state change to one or more of the business objects that comprise the various business activities. The Context Manager 902 ensures that the business units of work that are defined by the business functions are uniformly handled across the range of different Customer access resources. Again using a banking environment as an example, FIG. 9 illustrates a customer 910 calling a bank and establishing a session through a VRU 912. The customer 910 identifies themselves to the VRU 912, and decides to make an address change for all future correspondence. The Context Manager 902 initiates a transaction with the transaction service 920 and invokes the update address operation on the Customer object 922, which in turn registers interest in the transaction. Then, the Context Manager 902 invokes the update contact history operation on the Contact object 924, which in turn registers interest in the transaction. Next, the Context Manager 902 tells the transaction service 920 to commit the update address operation, and the transaction service coordinates the updates among the various components. FIG. 
10 illustrates an entity relationship model 1000 for the Context Manager 1002 according to the present invention. The Context Manager 1002 keeps track of the contact object 1004 and customer object 1006. Furthermore, the Context Manager 1002 keeps track of contextual or session information, such as the context across customer access resources 1010 and transaction management 1020. The Context Manager 1002 manages the session objects 1004, 1006 for each of the Customer access resources 1010, as well as initiates and coordinates multiple business activities. FIG. 11 illustrates the use of modules to achieve massive scalability. Each module uses the services of an object request broker 1108, and all components in the module are registered with the object request broker 1108. Within a module there may be any number of context managers 1110 and supporting service providers 1112. Additionally, customer access resources 1114 may be assigned to a module for administrative purposes. However, any customer access resource can access any context manager in any module. All communications between resources in the module are supported by the underlying TCP/IP network 1120. By creating modules, the scale of distributed object processing can be controlled to whatever rate is consistent with currently available technology. Should the maximum practical module size be less than that required for the enterprise, the solution simply requires the creation of multiple modules. The creation of multiple modules introduces no new distributed object scalability issues. The only issue relates to the sizing of the TCP/IP network 1120 upon which inter-module messages travel. Such a sizing effort is routine and requires no knowledge of distributed object technology or concepts.
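The transaction-management interaction walked through in FIG. 9 — participants register interest in the transaction, then a single commit coordinates the updates — might be sketched as follows. A real transaction service would also support rollback; the class and operation names here are assumptions, not the patent's API:

```python
class TransactionService:
    """Coordinates one business unit of work: business objects register
    interest, then commit applies every update in order (simplified)."""
    def __init__(self):
        self.participants = []

    def register(self, name, update):
        # e.g. the Customer object's update-address operation, or the
        # Contact object's update-contact-history operation.
        self.participants.append((name, update))

    def commit(self):
        committed = []
        for name, update in self.participants:
            update()
            committed.append(name)
        return committed
```

Grouping the address change and the contact-history update into one unit of work is what guarantees the two state changes land together.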
In summary, the Context Manager addresses the issue of maximizing the reuse of business processes across customer access resources by reducing overall implementation cost, improving the time to market and ensuring a consistent customer experience across customer access resources. The Context Manager provides a framework that enables continuous customer access resource innovation by maximizing the "openness" of the underlying business processes to create a "plug and play" type of interface. In this way, multiple combinations of users and delivery channels (called Customer access resources) may be created quickly using the best, most current technology and advanced usability characteristics for the specific need. These heterogeneous Customer access resources may then be plugged into the Context Manager to utilize its ability to leverage the existing, underlying business processes. Accordingly, the Context Manager provides the ability to manage the complexity of multiple Customer access resources which must share common business processes that may be distributed across many underlying, heterogeneous systems. The foregoing description of the exemplary embodiment of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. It is intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto. wherein each of the one or more components comprises a reusable software module presenting an interface that conforms to an object model of a context manager object or a business object, the component being enabled and accessed at runtime through the component integration architecture. a business process interface for interfacing the business processes to the customer access resources.
a managing means for managing information required during the session. 4. The context manager of claim 1 wherein the managing means keeps track of the context manager object, customer objects, account objects, contextual information and session information. 5. The context manager of claim 4 wherein the managing means manages the context manager objects, customer objects, and account objects for each of the customer access resources. 6. The context manager of claim 1 wherein the customer access resources comprise customer service representative applications, Internet/PC applications, and kiosk applications. 7. The context manager of claim 1 wherein the business processes represent high level abstractions of multiple components/objects collaborating to deliver a response to a service request from a particular customer access resource to an appropriate business process. 8. The context manager of claim 1 wherein the component interface relays a selection from the customer access resources to the business components. 9. The context manager of claim 1, wherein the context manager further comprises means for destroying context manager objects as sessions end. 10. The context manager of claim 1 wherein each context manager object maintains sets of relationships to all of the business objects that represent the context for a given session. 11. The context manager of claim 10 wherein the sets of relationships that the context manager maintains during a given session are dependent on the business processes the particular session is using. 12. The context manager of claim 1 wherein the context of a session is shared among customer access resources by sharing context manager objects. 13. The context manager of claim 1 wherein customer access resources can access the same context manager object to perform simultaneous, concurrent functions. 14.
The context manager of claim 1, wherein the transaction management means logically groups business activities into a business unit of work, a business unit of work involving a state change to one or more of the business objects that comprise the business activities. 15. The context manager of claim 1 wherein the managing means ensures uniform handling of the business units of work that are defined by the business functions across the customer access resources. 17. The context manager of claim 16 wherein the context manager management interface further comprises an information manager for managing information required during the session. the context manager management interface provides a context for completing the business process for the session. 19. The context manager of claim 16 wherein the context manager management interface keeps track of the context manager object, customer objects, account objects, contextual information and session information. 20. The context manager of claim 19 wherein the context manager management interface manages the session objects for each of the customer access resources. 21. The context manager of claim 16 wherein the customer access resources comprise customer service representative applications, Internet/PC applications, and kiosk applications. 22. The context manager of claim 16 wherein the business processes represent high level abstractions of multiple components/objects collaborating to deliver the requested response. 23. The context manager of claim 16, wherein the component interface relays a selection from customer access resources to the business components. 24. The context manager of claim 16 wherein the context manager management interface destroys context manager objects when a session ends. 25. The context manager of claim 16 wherein each context manager object maintains sets of relationships to all of the business objects that represent the context for a given session. 26.
The context manager of claim 25 wherein the sets of relationships that the context manager maintains during a given session are dependent on the business processes the particular session is using. 27. The context manager of claim 16 wherein the context of a session is shared among customer access resources by sharing context manager objects. 28. The context manager of claim 16 wherein customer access resources can access the same context manager object to perform simultaneous, concurrent functions. 29. The context manager of claim 16, wherein the transaction manager logically groups business activities into a business unit of work, a business unit of work involving a state change to one or more of the business objects that comprise the business activities. 30. The context manager of claim 16 wherein the context manager management interface ensures uniform handling of the business units of work that are defined by the business functions across the customer access resources. presenting the one or more business processes to customer access resources, the customer access resources selecting business processes therefrom. establishing a relationship between the context manager object and business process objects for satisfying the first customer request. 34. The method of claim 33, wherein the method further comprises switching from the first customer access resource to a second customer access resource during the session by forwarding the reference to the context manager object to the second customer access resource, the context manager object maintaining the context of the session including the context of the service request and the business process objects.
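The session-context pattern these claims describe — a per-session context manager object that keeps track of customer objects, account objects and business processes, and is handed by reference from one customer access resource to another — can be sketched roughly as follows. This is an illustrative sketch only; the class and method names are mine, not the patent's:

```python
# Illustrative sketch: one context object per session, shared by
# reference across customer access resources (CSR app, web/PC, kiosk).
class ContextManagerObject:
    def __init__(self, session_id):
        self.session_id = session_id
        self.customer_objects = {}    # customer objects in this session's context
        self.account_objects = {}     # account objects in this session's context
        self.business_processes = []  # business processes the session is using

    def track(self, kind, key, obj):
        # Maintain a relationship to a business object for this session.
        {"customer": self.customer_objects,
         "account": self.account_objects}[kind][key] = obj

    def begin_process(self, name):
        self.business_processes.append(name)


class AccessChannel:
    """A customer access resource, e.g. a CSR application or a kiosk."""
    def __init__(self, name):
        self.name = name
        self.context = None

    def attach(self, context):
        # Channel switching: forward the *reference* to the same context
        # object, so the session's context survives the switch.
        self.context = context


ctx = ContextManagerObject(session_id="S-1")
csr = AccessChannel("csr-app")
kiosk = AccessChannel("kiosk")

csr.attach(ctx)
ctx.track("customer", "C-42", {"name": "Jane Doe"})
ctx.begin_process("open-account")

kiosk.attach(ctx)  # hand the session over to a second channel
assert kiosk.context is csr.context            # same context object, not a copy
assert "C-42" in kiosk.context.customer_objects
```

The key design point, mirroring claim 34, is that the second channel receives a reference to the existing context object rather than a fresh one, so the in-flight business process carries over.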
2019-04-23T16:32:23Z
https://patents.google.com/patent/US6064973A/en
India is facing one of its most serious droughts in recent memory - official estimates suggest that at least 330m people are likely to be affected by acute shortages of water. As the subcontinent awaits the imminent arrival of the monsoon rains, bringing relief to those who have suffered the long, dry and exceptionally warm summer, the crisis affecting India's water resources is high on the public agenda. Unprecedented drought demands unconventional responses, and there have been some fairly unusual attempts to address this year's shortage. Perhaps most dramatic was the deployment of railway wagons to transport 500,000 litres of water per day across the Deccan plateau, with the train traversing more than 300km to provide relief to the district of Latur in Maharashtra state. The need to shift water on this scale sheds light on the key issue that makes water planning in the Indian subcontinent so challenging. While the region gets considerable precipitation most years from the annual monsoon, the rain tends to fall in particular places - and for only a short period of time (about three months). This water needs to be stored, and made to last for the entire year. In most years, it also means that there is often too much water in some places, resulting in as much distress due to flooding as there currently is due to drought. So there is a spatial challenge as well - water from the surplus regions needs to reach those with a shortfall, and the water train deployed in Maharashtra is one attempt to achieve this. Going from science fiction to reality is the dream of many an engineer or inventor who has envisioned a flying-car commute or teleportation to the beach. It’s not usually the domain of practical defense policy wonks. But that’s what makes the Defense Department’s third offset strategy different. 
The so-named quest for conventional military deterrence against China and Russia through the Pentagon’s use of game-changing technology now has a bureaucratic brand inside the Beltway. The third offset also has a budget, some $18 billion, to spend on fulfilling a vision of a future in which electromagnetic railguns shoot down hundreds of incoming cruise missiles, lasers slice through enemy warships, and robotic wingmen fly in first on the deadliest missions. This beguilingly simple premise, however, lacks a broadly understood plan to execute this vision. Absent crystal-clear orders from on high, and with plenty of the usual reasons for inaction, over-cautiousness is likely to prevail. Yet for the bolder members of America’s private sector, this is a moment of great opportunity to shape the next generation of military innovation. It is time for a new model for government-industry collaboration, risk assessment, and strategic vision that can enable the kinds of ambitious military capabilities the United States must aspire to lead in the 21st century. As the third offset transitions from policy to execution, its supporters now face the challenge of taking the conversation from the Pentagon’s E-ring to corporate boardrooms and innovation centers across America before the clock winds down on the Obama administration. For the next administration to carry on with the third offset, an important legacy will be the private sector’s understanding of the importance of prototyping, open innovation, and next-generation manufacturing methods. This reinvigorated defense industrial base is now in a position to reimagine and reinvent its existing technologies and platforms in new ways, a key third offset objective. According to a U.S. State Department report published earlier this week, there were 11,774 terrorist attacks worldwide in 2015, with 28,328 fatalities. Iraq experienced the most attacks last year, amid its struggle against the so-called Islamic State. 
There were 2,418 incidents in Iraq with nearly 7,000 fatalities. The report found that Iran remains the primary state sponsor of terrorism, providing a range of support, including financial, training, and equipment, to groups around the world. This chart shows terrorist attacks and total deaths worldwide in 2015. Mahmuda Khanam left her apartment in Chittagong, Bangladesh, on June 5 to walk her 6-year-old son to a school bus stop. On the way, they were approached by three men who stabbed her repeatedly, then shot her point-blank in the head, leaving her dead on the pavement with the shocked child. The assailants sped away on a motorcycle. This was not an act of random violence. It was an attack carefully targeted to punish Mahmuda Khanam's husband, Babul Akter, a senior Bangladeshi police official. As leader of the Detective Bureau in Chittagong, Akter had been instrumental in several investigations involving militants over the past two years, including one that led to the arrest of the military chief of the Jamaat-ul-Mujahideen Bangladesh in October 2015. In fact, Akter had been so effective in combating militancy in the Chittagong area that he had been promoted to a senior police post in Dhaka, Bangladesh's capital. According to news reports, he had moved to Dhaka to assume his new duties just days before his wife's murder, leaving her and their two children behind. The method of attack in this case was similar to those that have been used by jihadists in Bangladesh against bloggers, university professors, foreigners and religious minorities. When the method of attack is combined with Akter's past investigations of jihadist militants, it is not hard to conclude that this was intended as revenge. But instead of targeting the armed and trained Akter personally, the attackers chose a much softer target. Bangladesh is currently an arena of competition between al Qaeda- and Islamic State-oriented jihadists. 
As such, it can be seen as a microcosm of the larger ideological struggle for the heart of the global jihadist movement. Over the past year, in a kind of macabre competition, militants associated with both groups have attacked targets they regard as posing a challenge to their brand of Islam. The Censor Board’s decision last week to snip away all drug-related references to Punjab in the film Udta Punjab highlights the seriousness of the drug problem in that state. Senior leaders of the state’s ruling party can engage in whataboutery and call the drug menace a national problem, but data show that over the last decade Punjab has ranked consistently at the top or in the top 5 in many of the yardsticks used to measure drug abuse. That Punjab’s drug menace is a serious problem is evident from the fact it is perhaps the only state in recent times to commission a drug abuse study. The Punjab Opioid Dependence Survey, which was conducted between February and April 2015, found that 230,000 people in the state were drug users. That translates to 836 drug users per 100,000 people in the state. The All India number is 250 per 100,000 (for 2012), according to the ministry of social justice and empowerment. Consider the number of crimes reported under the Narcotics Drugs and Psychotropic Substances (NDPS) Act. There were on average 7,524 instances of crimes related to drugs in Punjab annually between 2005 and 2014. That’s second only to Uttar Pradesh, India’s most populous state. Look at the rate of crime per 100,000 population — Punjab fares far worse than any other state. In 2014 alone, the rate of reported NDPS crimes jumped to 50.5 per 100,000 population — four times that of second ranked Maharashtra with a rate of 12.4. Punjab ranks among the top five states that reported the biggest drug seizures in 2014. The other four were Mizoram, Manipur, Assam and Uttar Pradesh. 
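The per-100,000 figures quoted in the Punjab drug statistics above are simple incidence rates. The short sketch below reproduces the survey's number, assuming a Punjab population of about 27.5 million (my assumption for illustration; the article does not state the population used):

```python
def rate_per_100k(count, population):
    # Incidence per 100,000 people: the count scaled to a standard population.
    return count / population * 100_000

punjab_users = 230_000    # Punjab Opioid Dependence Survey, Feb-Apr 2015
punjab_pop = 27_500_000   # assumed population, not stated in the article

rate = rate_per_100k(punjab_users, punjab_pop)
print(round(rate))           # ≈ 836, matching the survey's figure
print(round(rate / 250, 1))  # ≈ 3.3x the all-India rate of 250 per 100,000
```

The same function applied to the NDPS crime counts would yield the crime rates the article compares across states.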
Here’s another statistic that places Punjab on top: about 44.5% of total convicts under the NDPS Act in India at the end of 2014 were in Punjab, and the figure has consistently increased over the years. While this could mean that the state is fighting hard to combat this problem, it also highlights the extent of the drug abuse menace in that state. Suicides due to drug abuse or addiction made up 2.8% of all suicides in India in 2014. In the case of Punjab, this stood at 4%. Drug-related suicide deaths in Punjab have decreased between 2011 and 2014, but it still figures among the top five states. The Punjab Opioid Dependence Survey found that 89% of opioid dependents in Punjab were literate and educated, 83% were employed and they were mostly male. Chart 6 has the details of the survey. The biggest hurdle in the follow-up on Prime Minister Narendra Modi’s visit to Iran will be the Indian bureaucracy. The security establishment, which has profound difficulty in understanding Iran and sees it as the land of Islamic fundamentalists, never wanted India to get entangled with that country. As for the foreign-policy establishment, it is guided by the Israeli cat whiskers, and is certain that the Iran nuclear deal has no future, and, therefore, what is the point in hurrying. To be sure, Nitin Gadkari has a job on his hands to get the bureaucracy cracking with the implementation of the projects in the pipeline with Iran. Our bureaucrats can learn a few things from the way Americans do business with Iran. Here is a country that Iranians used to call Great Satan. Here is a country that we foolishly imagine to be the playpen of the Israeli Lobby. Here is a country that we think is disinterested in seriously engaging with Iran. 
And, yet, in April, as soon as it became known that Airbus had secured a $27 billion deal for supplying 118 passenger aircraft to Iran, Washington lost no time in taking a historic decision – to give a ‘waiver’ enabling Boeing to send a team down to Tehran (despite the US sanctions against Iran) to figure out the business prospects. The Iranians of course were happy to receive the team, and both sides kept up the pretence that this was an opportunity to get to know each other. Now, two months down the line, Reuters has figured out that Boeing has already begun working on a deal to supply around 100 aircraft to Iran – almost on the same scale as Airbus. The stunning part is that Boeing cannot do business with Iran within the ambit of the existing US sanctions. Despite the argumentative chaos of Indian democratic life, where his proponents and opponents continue to slug it out, Narendra Modi is widely seen abroad as a leader who signifies energy and hope for an aspirational India. His coming to office unleashed a surge of expectations, and that tide has not receded. The sustained American outreach, and his embrace of the prospect, and increasingly tangible reality, of interlocking interests between the world’s two most important democracies, are very much a part of the Modi-era zeitgeist. This week Mr. Modi went to Washington again, his visit a powerful and evocative celebration of what is now termed an enduring global partnership between two key democracies, both countries of the Asia-Pacific world. This relationship is an ever-evolving one, increasingly multifaceted. Foreign Secretary S. Jaishankar termed this visit, the second bilateral visit made by Mr. Modi to the United States in two years, as a “consolidation”. The joint statement issued during the visit, on June 7, spoke of the two countries pledging to “provide global leadership on issues of shared interest”. The opening of the doors of the Capitol, as Mr. 
Modi termed it, during his address to the two Houses of Congress, of this “temple of democracy” as he said, drawing reference also to Abraham Lincoln, signified in many ways the coming round of the circle of redemption for a political personality who, till his coming to office as Prime Minister of India, had been denied a visa to enter the U.S. Pushing the right buttons, knowing how to win American friends, speaking an idiom understood by Americans, he demonstrated perfect pitch in his homage to the memory of American servicemen buried in Arlington Cemetery. In the past, falling oil prices were seen as a net benefit for the global economy, and stock values therefore rose when prices fell. Cheap oil is a form of consumer stimulus; the rule of thumb has been that a fall in price of $10 a barrel boosts global GDP growth by about 0.2 percentage points. Importers benefit a little more than exporters suffer. So what’s common between Chicken Country Captain, Railway Mutton Curry, Dal Tadka and the Ledikeni? Aside from the fact that each one of them belongs to a modern community (Anglo-Indian mostly) and has been the creation of a chef with very limited resources (like, seriously limited), it is their common ground of birth: the Dak Bungalow. Interestingly, for all the toast that Dak Bungalow food is made out to be these days (courtesy food writers and chefs rediscovering it today), its beginning (and subsequent existence) wasn’t all that grand. In fact, many of the earlier Dak Bungalows that were set up by the British back in 1840 to accommodate their kin and brethren during long road travel were far from a success. Fascinatingly, the very reason for Dak Bungalows was a comfortable stay and palate-friendly food – more of the latter – especially when the British found traveling by water to distant places an ordeal. It all began with Lord Auckland in 1837. 
Eager to get a first-hand account of the newly acquired colony, Auckland decided to travel from Calcutta to Simla in a budgerow with a retinue on par with that of Emperor Akbar’s wedding party. The little battalion is said to have been made up of 850 camels, 250 horses, 140 elephants and 12,000 men – almost four times the army used by Shivaji to regain half his empire. The traveling party also included a few friends, their wives and associates, and a French chef by the name of St. Cloup. The budgerow he travelled in had cabins with Venetian windows, making it comfortable as it let in the right amount of sunlight (sometimes too much of it) and fresh air. But when it came to food, everything came spiraling down. Or, as one of the members of the large travel party described it – the food that came out of the cook boat, whose duty was to collect fresh produce and cook as per our taste, was rather filthy and confusing. A “dak bungalow” in Kenya, c. 1900. The term was sometimes applied to similar structures throughout the British Empire. Now that they are well on the road to ‘robotization’, military organizations will have to pay closer attention to the above issues. Kalyan Kemburi believes that specifically means combating organizational inertia, changing procurement practices, defining the nature of ‘manned-unmanned teaming’, accounting for the reemergence of mass in conflicts, and much more. This commentary was originally published by the S. Rajaratnam School of International Studies (RSIS) on 2 June 2016. First, organisational inertia: Currently, men and women across the military rank and file operate high-end unmanned systems such as UAVs. Most of the missions undertaken by these systems are mundane and repetitive in nature, predominantly focused on surveillance and reconnaissance. 
To use highly trained soldiers for these kinds of tasks could increasingly prove to be both operationally and financially unsustainable; therefore, one of the more judicious uses of resources might be to recruit and train specialists in operating these systems. Second, procurement procedures: The prevailing development and acquisition procedures for legacy platforms involve billions of dollars in investments spread over two to three decades. Rapid technological changes, along with the dynamic nature of the geostrategic landscape, make many of these systems obsolete and/or irrelevant to the emerging mission requirements. Automated assembly lines with 3-D printing have the potential to fundamentally change the prevailing R&D and acquisition procedures. With rapid prototyping of new systems along with rapid scaling of production, not only are the production cycles for legacy systems substantially reduced, but the production of unmanned systems is also potentially decentralized. Third, democratisation of technology: The dual-use nature of robotic systems and their commercial availability allow relative ease of acquisition by non-state actors and technologically less advanced states. Many civilian and military autonomous systems share the same basic sub-systems and sensors. For example, iRobot’s PackBot military robot has its roots in its civilian counterpart. Therefore, the threshold to weaponise an unmanned/robotic system is very low compared to other dual-use technologies such as nuclear or biotechnology. Reality Check, June 10, 2016, by Kamran Bokhari – Central Asia: The Next Region to Unravel. The instability in Kazakhstan in recent weeks could spread throughout the region. 
While the world is focused on the crises in the Middle East, the European Union, Russia and China, Central Asia – located at the center of these regions – is in meltdown. Central Asia cannot avoid being affected by the chaos in the countries surrounding it and is at risk of destabilization. The largest and wealthiest state in the region, Kazakhstan, is most at risk. In recent weeks, Kazakhstan has been hit by two types of security challenges: civil unrest and terrorism. In May, Kazakh law enforcement agencies broke up demonstrations across the country, protesting plans to privatize large swathes of farmland. The government of President Nursultan Nazarbayev and its ally, Russia, believe these protests were backed by the U.S. and designed to foment a color revolution. Considering the large area covered by the protests and the fact that this is an authoritarian state that does not tolerate any genuine opposition, the idea that the West was trying to push Kazakhstan into a Ukraine-like revolution is not unreasonable. While Astana was still grappling with this issue, the country was rocked by a terrorist attack that killed 19 people on June 5. It was carried out by suspected Islamist militants in the northwestern industrial city of Aktobe. The attack, which involved 20 gunmen who struck at three separate locations, appears to have been a fairly sophisticated operation – at least for Kazakhstan, where such incidents are quite rare. Two cells struck at two separate firearms stores, while a third commandeered a bus and used it to ram the gate at a national guard base. Nazarbayev, who is 76 years old and has ruled the oil-rich country for a quarter of a century, issued a statement warning that foreign forces were out to destabilize the country. Whether foreign actors played a role in either of the two incidents remains unclear. But it appears that both pro-democracy and jihadist forces are challenging the regime. For Geopolitical Futures, this is not surprising. 
Our forecast for the current year predicted that Central Asia is headed toward a crisis. Our position has been that the Central Asian states will destabilize because the world around them has descended into turmoil. The Middle East is in chaos because of the meltdown of autocratic regimes, which has enabled the Islamic State to emerge as a major international security threat. The European Union has become an incoherent entity and faces an uncertain future as Germany deals with a looming export crisis. To the east, China’s growth miracle has come to an end. Finally, Russia, which wields the most influence in Central Asia, is in deep trouble because of the plunge in oil prices. Therefore, it is impossible for Central Asia to remain an island of stability in the middle of an ocean of chaos. Though we are at the beginning of the unraveling, the events in Kazakhstan show that our forecast is on track. For over two decades, the country’s leadership has maintained stability largely because of revenues from crude oil exports. It was a country built on oil wealth, with Western and Chinese investor interest and a strong alliance with Russia. With the steep decline in oil prices, the Nazarbayev regime is struggling to maintain order. Nazarbayev and his top associates have been trying to deal with rampant corruption in the armed forces. The impending leadership transition (due to Nazarbayev’s age) and the weakening of the authoritarian system are creating space for a host of actors who until now were kept at bay. It will be a while before the instability metastasizes in Kazakhstan, and at this early stage it is difficult to know how events will unfold. However, there are few arrestors in the path of this trajectory. Those forces seeking democratic change seem weak, while those with an Islamist agenda in this Muslim-majority nation seem more powerful – in no small part due to their use of armed insurrection. 
Therefore, the country may turn into a large ungoverned space while the world continues to hope democrats will replace the Soviet-era regime. This is similar to the Arab Spring, which the West hoped would bring democracy to the Arab world; this hope soon faded. The Arab Spring started in a relatively small North African country, Tunisia, but then quickly spread across the Middle East. In Central Asia, the instability has started in the largest country in the region – leaving other Central Asian states vulnerable. When Kazakhstan destabilizes, Turkmenistan, Uzbekistan, Tajikistan and even Kyrgyzstan (which has already gone through one popular uprising) will not be far behind. All these states, along with Russia, have long been worried about how post-NATO Afghanistan could destabilize Central Asia and even Russia. However, the biggest state in the north of the region, far from Afghanistan, is actually where the unrest has begun. The geopolitical precipice that Central Asia is now standing on – highlighted by the events in Kazakhstan – suggests that the fallout from a resurgent Taliban in Afghanistan may be just a footnote in the story of how this region foundered. The provision of clean, safe drinking water in much of the world is one of the most significant public health achievements of the past century - and one of the foundation stones of a healthy society. In the developed world, most people are able to take this service for granted and pay very little for it. In the UK, water services are based on legacy infrastructure systems; the country lives off Victorian engineering. These systems are ageing and deteriorating and will require unprecedented investment to be fit for the future. Therefore the country needs to reimagine its water services to deliver water sustainably via systems that are affordable, adaptable and resilient. Global population growth is threatening the security of water supply and when coupled with the impacts of climate change, it is clear that our historical approach to the provision of water may not remain feasible. Increasingly stringent drinking water quality and environmental discharge standards protect us from pollutants but require increasingly complex and energy-consuming treatment. Leakage of water from ageing infrastructure wastes more of this precious resource, yet the costs of replacing that infrastructure seem insurmountable. Perhaps it is time to reconsider the one-size-fits-all approach of large centralised infrastructure and instead pursue a suite of solutions tailored to local needs. 
Could it be possible to have water systems that have no adverse impact on the environment, or better yet - water systems with positive impacts for people, society, the environment and the economy? A while back I came across a rather awful piece on The Economist's website entitled A Marxist Theory is (Sort Of) Right. The piece is indicative of what I think to be a far more general trend in contemporary intellectual life: namely, the fact that Marxism exists as a sort of weird counterpart to what we generally call the ‘conventional wisdom’. Before I saw the article in The Economist, I wrote a post dealing with JK Galbraith and what he called the ‘conventional wisdom’ but perhaps I should again provide a nice quote from his The Affluent Society that lays out once more what the conventional wisdom is. That’s a rather nice summary: the conventional wisdom is characterised by ideas that are stable, predictable and, above all, familiar. With this in mind we can approach The Economist article but first a word on the publication. The Economist magazine is perhaps the prime organ that disseminates the conventional wisdom that exists in the economics profession today. It is geared toward a popular audience — unlike the far more sophisticated and specialist Financial Times — and can thus regularly be found, for example, in the dentist’s waiting-room. Whereas the Financial Times is a serious organ that seeks to provide real, tangible information in fairly concentrated form to an audience that actually uses such information in their professional lives, The Economist is better thought of as a sort of upmarket glossy magazine providing whimsy for a middle manager or a lawyer awaiting a filling or a root canal. The contrast could not be more glaring. Since we must compare ourselves with India, that being our eternal yardstick, we must take this in: the Indian prime minister addresses the US Congress and draws repeated applause. 
Pakistan, meanwhile, presents a fractured picture, democracy in place but the army in charge and calling the shots. The army chief snaps his fingers and PML-N ministers dutifully line up at General Headquarters, in the same hallowed room next to the chief’s office where corps commanders’ meetings are held. We are not good even at keeping up appearances. The same meeting could have been held at the foreign office, thereby at least preserving the fig-leaf of civilian supremacy. But I suppose a point had to be made and it could not have been made more dramatically. The picture released by the army’s spin machine, ISPR, says it all: the commanders lined up on one side and the civilians, their haplessness on full display, on the other. The army is in command and the civilians, on key fronts, have abdicated authority…not at the point of the bayonet, let it quickly be said, but as the fruits of incompetence. For all its democratic authority, the PML-N leadership, from the prime minister to his talented brother, is incapable of conducting a sustained dialogue on pressing national issues with the army brass. The mental wherewithal is simply not there…period. They have political skills no doubt, and very sharp at that when it comes to preserving their power and promoting their business interests. In these things no one in Pakistan can come close to them. But engage them in abstract talk on, say, national security or foreign policy and the blank looks that emerge have to be seen to be believed. As I have had occasion to mention before, when they were taking their first political steps their favourite method of interacting with important generals was to make them gifts of BMW cars. If a general accepted – and there was talk in those days that some upright generals did – he was considered a good fellow, trustworthy and dependable. Anyone who refused was looked at with suspicion. 
After a long hiatus, George Soros has returned to trading, lured by opportunities to profit from what he sees as coming economic troubles. The moves are a significant shift for Mr. Soros, who earned fame with a bet against the British pound in 1992, a trade that led to $1 billion of profits. In recent years, the 85-year-old billionaire has focused on public policy and philanthropy. He is also a large contributor to the super PAC backing presumptive Democratic nominee Hillary Clinton and has donated to other groups supporting Democrats. Mr. Soros has always closely monitored his firm’s investments. In the past, some senior executives bristled at how he sometimes inserted himself into the firm’s operations, usually after the fund suffered losses, according to people familiar with the matter. But in recent years, he hasn’t done much investing of his own. That changed earlier this year when Mr. Soros began spending more time in the office directing trades. He has also been in more frequent contact with the executives, the people said. A car is seen on fire at the site of a drone strike believed to have killed Afghan Taliban leader Mullah Akhtar in southwest Pakistan in this still image taken from video, May 21, 2016. The drone strike that targeted the Taliban leader points to a troubling future for Pakistan’s largest province. In July 2015, when Afghan intelligence reported that Taliban leader Mullah Omar had in fact been dead for two years, the Taliban chose as their new leader Mullah Akhtar Mansour. That selection created a fissure in the Afghan Taliban. In recent months, rumors emerged that Mullah Mansour was killed in Kuchlak – a town about 25 km from Quetta, and home to half a million mostly Afghan refugees – in a gunfight with a rival faction. The Taliban sought to quash the rumors by releasing an audio message in which Mullah Mansour denied he had been killed. 
Mullah Mansour was heading towards Quetta when his car was targeted by a drone, killing him and his driver Mohammad Azam in Balochistan’s Nushki district. Initially, local authorities said the car was carrying explosives, which caught fire. But locals said they saw “a small airplane” – by which they mean a drone – hovering over the car after the attack. Some observers have pointed out that targeting Mullah Mansour would have been impossible without ground intelligence. They go on to note that ties between the U.S. and Pakistan have been strained. So, they conclude, perhaps in this case Pakistan collaborated quietly? Government officials vehemently deny any such claims. Sarfraz Bugti, Balochistan Home Minister, condemned the strike, calling it an attack on the sovereignty of Pakistan. The second round of agitations has started in Nepal. There seems to be a new game plan from the Sanghiya Gathbandhan, or Federal Alliance, as it gets back into agitation mode. How is the government likely to react? The Janajati and Madhesis have joined forces and formed a Federal Alliance known as the Sanghiya Gathbandhan to protest together in Kathmandu. The Madhesis seem to be striving for the recognition of all the marginalised ethnic groups in Nepal. The new coalition seems to be getting support, as they are seen fighting for a just cause. A federal state is not just the demand of the Madhesis, but of the majority in Nepal. The constitution demands dividing Nepal into seven federal units. There is no basis on which this division has been made. If this proposal moves forward then people belonging to the same ethnic group will be split into different units. This is something which is opposed by all the ethnic groups. The Madhesis have been successful in mobilising the support of other ethnic groups. 
This is not because the ethnic groups in Nepal are sympathising with the Madhesis, but because the demands of the latter resonate with their own. Unlike the blockade earlier, the new agitations that started in May 2016 are not the demands of the Madhesis alone; they are a call for recognition by all the marginalised ethnic groups in Nepal. Is the government listening? The decision to move the protests to Kathmandu has been a stroke of genius. It has produced the outcome that the Madhesis had wanted - mobilise support and increase momentum for the movement. Now that the Madhesis are not alone in their cause, utmost caution is required moving forward. They should stick to their demands and not let success undermine their larger goal. There have been recent reports that the majority of the Morcha allies have threatened to withdraw from the alliance because some leaders are seen as using it for their political interests, disregarding the concerns of the Morcha allies. Divisions within the alliance would have a larger impact on the movement and could result in its failure. This could gravely affect the momentum that the movement has picked up during May 2016. A migrant worker’s death sharpens criticism of India’s inability to protect its overseas laborers. NEW DELHI — India is currently roiled by the death of Mahavir Yadav, a 57-year-old migrant worker who went to Saudi Arabia in 2010 to work as a painter and never returned. News suddenly arrived last month that Yadav had died of a heart attack triggered by extreme stress and work conditions that were almost Dickensian in their wretchedness. His employer — a local Saudi — would beat the elderly worker, deprive him of his salary, and confiscate his passport. Yadav’s two young daughters in India, now orphaned, are running from pillar to post to retrieve their father’s body, which lies in a Saudi Arabian hospital, mired in complex legal formalities. 
The duo have even written to India’s External Affairs Minister Sushma Swaraj, seeking her help. This is not a stray incident of a single Indian migrant worker’s cruel death in the Middle East. Many such heartrending tales have surfaced since 1983, when the Indian exodus of migrant workers to the region began following the great Gulf boom of the 1970s. The Indian government passed a new Migration Act to promote migration and cash in on the twin opportunities of foreign exchange remittances as well as overseas employment generation. Due to its economic attractiveness, oil-drenched UAE became a popular destination for temporary Indian labor migrants seeking gainful employment and higher standards of living. India’s globalization in the 1990s further expedited this cross-border movement of people from the country, making it one of the largest labor exporting nations in the world. However, the exodus also brought with it a panoply of fraud and exploitation cases of Indian workers in the host countries. To tackle the new exigencies, India substituted the 1983 law with the Emigration Act of 2001. The Ministry of Overseas Indian Affairs (MOIA) was also constituted to sign manpower agreements with five Gulf countries, excluding Saudi Arabia. Marc Lynch, The New Arab Wars: Uprisings and Anarchy in the Middle East (PublicAffairs, 2016). On May 19, 2011, President Barack Obama stood in the ornate Ben Franklin Room on the State Department’s 8th floor and called for a broad change of approach in America’s engagement with the Middle East, making clear that he backed political and economic reform. 
Responding to the dizzying first six months of the Arab Spring, Obama reiterated America’s enduring security interests, yet acknowledged that grievances had accrued among ordinary people that “only feed the suspicion that has festered for years that the United States pursues our interests at their expense.” The speech was lauded for its sharp diagnosis of the problem and willingness not to pull punches. It was also widely interpreted as a dramatic swing away from Obama’s customary caution and pragmatism — many commentators remarked on its exuberance. Back then, the Arab Spring was still seen as reason for optimism. For many of us working in the White House at the time, it recalled the heady days of 1989, when walls came down and the Cold War ended. As Obama told us in the weeks before the speech, he wanted to do some “truth-telling” about what was going on and how the United States needed to embrace this transformation and change its approach. Yet toward the end of his speech, Obama made mention of three places that, unintentionally, foreshadowed the challenges to come. Despite all the uncertainties, he wanted to recall the reasons to have hope. He cited the examples of the Libyan city of Benghazi, at that moment protected by U.S. and allied planes attacking Qadhafi’s forces; young people cramming Egypt’s Tahrir Square to demand political change; and the protestors in Syria, braving bullets while chanting “peaceful, peaceful.” In May 2011, these examples symbolized potential, and his cautious confidence seemed reasonable. Yet in the years to come, it was in these three places most of all — Libya, Egypt, and Syria — where the Arab Spring died. The story of how this happened has already produced a pile of books, but I can think of few better than Marc Lynch’s The New Arab Wars. Sober, insightful, self-critical, and at times searing, this book is one of the clearest accounts yet of the tangled mess of today’s Middle East. 
Lynch, a well-respected political scientist at George Washington University, was one of the scholars we would reach out to for insight on what was happening in the Arab world when I served in the Obama administration. With this book, he has given us not another scholarly tome, but an indispensable autopsy of the Arab Spring. It is also the best-informed and sharpest pushback to the Washington wisdom on the Middle East — what Obama has famously called the “playbook” — I have read. India needs the US on many counts in order to build up an optimal cyberspace ecosystem, bolster cyber security across sectors and most importantly, build critical infrastructure. Prime Minister Narendra Modi concluded his fourth visit to the US recently, one in which he had significant discussions on cyber security cooperation with President Barack Obama. This was possibly the last such meeting before Obama demits office later this year. These discussions endorsed a factsheet that should result in the signing of the framework for the US-India cyber relationship in the next two months. Not that India and the US have not engaged on cyber issues in the past; they started after the historic meeting between Prime Minister Atal Bihari Vajpayee and President George W. Bush in November 2001. However, engagement over the last 15 years has remained confined to statements of intent and some exchange programmes, with meaningful success being achieved only on the sensitisation of Indian law enforcement to the American way of dealing with cyber issues. A so-called spying incident involving officials from both sides derailed the process in the early years, but real momentum was regained since the Modi government took office in May 2014. While the US was definitely enthused about the announcement of the National Cyber Security Policy (NCSP) in May 2013 by India, nothing actually moved forward. 
Rather, nagging issues around the security of telecom equipment, mostly from that country, dogged the relationship then, and India’s posture on internet governance mechanisms stood in the way of active engagement. When India proposed a 50-nation committee on Internet Related Policies for the management of cyberspace issues at the 66th session of the UN General Assembly in October 2011, it came in direct opposition to a US-led effort that was aimed at fostering a multi-stakeholder approach. Over the next couple of years, the primary engagement between India and the US remained focussed on how this divergence could be bridged. The Indian government was also conscious of the realities of the fast-emerging internet ecosystem, and the Modi government closely weighed the economic impact of the internet. Based on the recommendations of a committee of three senior cabinet ministers, India finally announced its support for the multi-stakeholder model in August 2015. When Apple's iPhone sales started showing signs of stagnation recently, one of the most cited reasons for the slowdown was the iPhone's steep price tag compared to the competition. And while it's probably right to assume that a lower price would boost iPhone sales, the question is: why should the iPhone's price suddenly be more of a problem than it used to be? After all, it has always been more expensive than most Android phones, but that didn't stop Apple from outperforming the market many times over the past few years. First of all, the smartphone market has changed. In highly developed regions such as North America and Europe, the market is pretty much saturated - most people who want a smartphone already have one. Much of today's smartphone growth comes from emerging markets, where price is much more of an issue. At the same time, the price gap between iPhones and Android phones has widened considerably over the years. 
As our chart, based on data from KPCB, illustrates, the difference in average selling prices grew from $218 in 2008 to $443 in 2016. While high-end Android phones, such as Samsung's top-of-the-line devices, still cost about as much as an iPhone, there is a large number of manufacturers (mostly from China) who sell Android smartphones for a fraction of the iPhone's price these days. This chart compares the average selling price of iOS and Android smartphones since 2008. Yesterday was the 100th anniversary of U.S. Secretary of Defense Robert Strange McNamara’s birth. It is tempting to mark this week with a retrospective of his contributions to U.S. policy during the Kennedy and Johnson administrations, and the Vietnam War in particular. McNamara himself was deeply troubled by how the war there eventually spiraled out of control. His memoir of his time in the executive branch, In Retrospect, can be read as an exercise in absolution, in so far as it is permitted in public life. McNamara’s most lasting legacy, however, is not Vietnam. Rather, it is found in the introduction and cultivation of a body of knowledge that views national security problems, including those related to conventional and nuclear war, as economic problems. The group of experts who specialized in this systematic quantification of national security issues – who almost overnight went from being policy analysts to policy makers under McNamara – also ushered in a radically different point of view on American nuclear strategy, a striking departure from the one advocated by the Eisenhower administration. President Kennedy knew I would bring to the military techniques of management from the business world, much as my Harvard colleagues and I had done as statistical control officers in the war.
Economic growth in sub-Saharan Africa (SSA) in 2006 remained robust at 5.4 percent, after growth of 6 percent in 2004 and 2005 (Table 2.1). Growth moderated in oil-exporting countries because they had temporary difficulties in expanding oil production (Figure 2.1). Growth in the region was increasingly driven by domestic investment, rising productivity, and, to a lesser degree, government consumption, together more than offsetting the declining contribution from private consumption (Figure 2.2).1 Higher oil revenues and debt relief supported increased government spending. The recent uptick in investment augurs well for SSA’s future growth because it is broadly spread throughout the region in both oil-producing countries (OPCs) and oil-importing countries. Sources: IMF, African Department database; and World Economic Outlook (WEO) database. Note: Data as of March 29, 2007. Arithmetic average of data for individual countries, weighted by GDP. 1 Defined on the basis of net oil exports; includes Angola, Cameroon, Chad, Republic of Congo, Côte d'Ivoire, Equatorial Guinea, Gabon, and Nigeria. 4 Includes the countries covered by the IMF African Department plus Djibouti, Mauritania, and Sudan. Source: IMF, African Department database. The global environment in 2006 was favorable for SSA. Healthy economic growth in most parts of the world raised demand for the region’s exports. Global demand for fuel and other commodities was particularly strong, and their prices rose through most of the year, boosting SSA’s terms of trade, especially for OPCs. Even for oil importers the terms of trade improved in aggregate, deteriorating in only about one-third of them. For many countries the depreciation of the U.S. dollar helped buffer rising oil prices. While SSA has profited from commodity booms in the past, they were followed by painful and protracted adjustment periods that wiped out most of the previous growth gains. 
This time, spending is boosted not only by higher commodity revenues but also by debt relief, but growth also seems to be supported by more prudent macroeconomic policies in most countries, making it more sustainable. The change is most obvious in resource-poor landlocked countries, where over the past three years growth has outperformed that of nonfuel-resource-intensive and coastal countries.2 As inflation has declined, it has increased the real resources available to the private sector, resulting in higher domestic savings. Growth in per capita income exceeded 3 percent in 2006 against 4 percent in the previous two years.3 The challenge now is to accelerate growth and spread it throughout the region to achieve the income poverty goal of the MDGs. At present only about half a dozen countries seem to be on track to meet it. The limited poverty data for 19 countries covering 1984 to 2004 shows that economic growth is a critical ingredient for reducing poverty (Box 2.1). Country evidence also suggests that growth needs to be supported by targeted distribution policies to make inroads into poverty.4 As a region, SSA is off-track on all the MDGs, although some countries are making rapid progress. Six of the seven top developing countries in expanding completion rates for primary education between 2000 and 2005 are in SSA, as are 5 of the 10 countries making the fastest progress toward improving access to clean water and sanitation. Sustained growth and effective distribution policies will be critical to whether SSA halves poverty by 2015. From household survey data from 19 SSA countries it appears that countries that sustained real per capita growth above 1 percent between surveys have reduced the share of their population living on less than a dollar a day—except in Botswana and Lesotho, where income inequality is the highest in the region.1 (figure). Since these surveys precede the recent improved growth in SSA, its effects on poverty are still unknown. 
National poverty data from a subsample indicate that urban poverty is more likely than rural poverty to fall with high per capita growth. In part, this may reflect problems in measuring both poverty and growth, given the overlap between the urban and formal sectors in many countries. A notable exception is Mozambique, where rural poverty has declined more than urban. While inequality has been reduced in almost half the countries in the sample, it has risen further in the others, some of which started off with already large inequalities. On the Gini index, Botswana, Lesotho, and South Africa have among the least egalitarian income distributions in the region.2 Analysts have attributed inequality in SSA to historical factors, persistent hierarchical sociopolitical structures, and ethnic fractionalization (Milanovic, 2003). In SSA countries with a high Gini index, the consumption share of the richest quintile has risen, and that of the poorest has fallen; the share of the middle class (fourth through sixth deciles) is among the lowest in the region, and it fell or held constant between surveys. This distributional pattern may have undermined the emergence of the sizable middle class needed to propel foreign and domestic investment in the region. Source: World Bank, PovCal Net database; World Development Indicators, 2006. Note: Change in Poverty is the fall/rise in the percentage of population living on less than a dollar a day. Growth is the average real per capita growth between survey years. Public policy could do more to address poverty and income distribution. The incidence of government in-kind transfers like health and education spending is particularly skewed in SSA. The richest quintile receives 32 percent of education spending, and the poorest just 13 percent (Chu, Davoodi, and Gupta, 2004; Davoodi, Tiongson, and Asawanuchit, 2003). 
In 2000, by directing social spending to the poor, South Africa was able to lower its before-tax, before-transfer Gini of 0.57 (among the highest in the world) to 0.35, a substantial improvement from 1993, when its social spending was seen as relatively neutral (South Africa, 2003). Note: This box was prepared by Smita Wagh.1 The analysis is based on household survey data released between 1984 and 2004. Data availability dictated the choice of countries and years for comparison (see Appendix I, Table A1 for countries and years covered), and the consistency and comparability of household surveys over time may not be reliable. Unless otherwise indicated, poverty data refer to the dollar-a-day poverty line; using national measures would have restricted the sample size.2 The Gini coefficient is calculated by dividing the area lying between the Lorenz curve (which plots cumulative income shares for a population) and a 45-degree diagonal by the total area lying under the 45-degree line. A value of 0 indicates complete equality; a value of 1, maximum inequality. Inflation in the region was subdued, thanks to prudent macroeconomic policies and another good harvest. In aggregate, inflation (excluding Zimbabwe) declined from 8.1 percent in 2005 to 7.2 percent in 2006 (Figure 2.3), even though in many countries high international oil prices were passed through to domestic buyers.5 Inflation declined strongly in OPCs, reflecting stabilization gains in both Angola and Nigeria. Nigeria also benefited from a good harvest, as did many other SSA countries, which eased the food supply. With few exceptions bank financing of the budget deficit was negligible, and monetary policy responded early to inflationary pressures in a number of countries. Global demand, especially for commodities, helped strengthen the external position of many SSA countries. Oil exporter revenues from rising prices more than offset a slowdown in output (Figure 2.4). 
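The Lorenz-curve definition of the Gini coefficient given in the box note above translates directly into a few lines of code. The sketch below is illustrative only (the function name and the trapezoid-rule approximation are my own, not the report's method): it builds the Lorenz curve from sorted incomes and divides the area between the curve and the 45-degree line by the total area under that line (which equals 1/2).

```python
def gini(incomes):
    """Gini coefficient from the Lorenz curve: the area between the curve
    and the 45-degree equality diagonal, divided by the area under the
    diagonal (1/2). Returns 0 for complete equality, approaching 1 for
    maximum inequality."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    # Points on the Lorenz curve: (cumulative population share,
    # cumulative income share), starting at the origin.
    lorenz = [(0.0, 0.0)]
    cum = 0.0
    for i, x in enumerate(xs, start=1):
        cum += x
        lorenz.append((i / n, cum / total))
    # Area under the Lorenz curve via the trapezoid rule.
    area = sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(lorenz, lorenz[1:]))
    return (0.5 - area) / 0.5

print(gini([1, 1, 1, 1]))            # perfectly equal incomes -> 0.0
print(round(gini([0, 0, 0, 4]), 2))  # one person holds everything -> 0.75
```

Note that with a finite sample the discrete maximum is (n - 1)/n rather than exactly 1, which is why the four-person extreme case above yields 0.75.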
Rising nonfuel commodity prices counterbalanced the impact of high fuel prices on oil importers; in fact, in 2006 their terms of trade improved by 6 percent. Developments in late 2006 were particularly favorable for nonfuel commodity exporters because their export prices remained relatively strong while oil prices declined. SSA’s current account (including grants) was broadly in balance. With strong capital inflows (see below), the reserve position of SSA countries improved markedly; even oil importers (excluding South Africa) on average managed to raise their import cover slightly to 4.5 months of imports (Figure 2.5). Sources: IMF, Commodity Prices; and UN Comtrade. 1 Composite of cocoa, coffee, sugar, tea, and wood, weighted by SSA exports. Source: IMF, African Department database. 1 Excluding South Africa. With the global commodity boom, Asia and the United States have emerged as major trading partners for SSA (Figure 2.6). While the European Union is still the dominant trading partner for most SSA countries, rising exports of fuels and other commodities to destinations outside the European Union and rising textile exports to the United States generated by its AGOA have changed the pattern of SSA exports. Chapter IV presents an analysis of how SSA’s exports are evolving and discusses policies that are essential to further integrate the region into the global economy. Communication and information technologies can help SSA countries lower their costs and increase productivity. The rising use of cell phones, for example, facilitates access to financial services and a deepening of financial markets. While access to these technologies in SSA is still well below the world average, the region is slowly catching up (Box 2.2). Note: 2006 data to October. 
The spread of information and communication technologies has enabled or contributed to the transformation of the global economy, characterized by shifts in the location of economic activities, increased fragmentation of production processes, and emergence of some new types of trade, most notably in services. While access to communication services is relatively low in Africa compared with other regions, Africa is rapidly narrowing the gap—it is one of the fastest-growing markets for cellular phone services globally—which is creating opportunities for both trade and domestic businesses. Information technology (IT)-related capital deepening is estimated to have accelerated growth of GDP in SSA by about 0.2 percentage points annually, up to an additional 3 percent of GDP cumulatively over 1991-2005. This is about half the growth impact of IT investment in OECD countries. There has been a substantial improvement in access to communication services in SSA over the past ten years (figure). Most notably, cell phone subscriptions grew 60 percent annually between 1994 and 2004, while mainline services grew moderately at only 6 percent annually. As a result access to communication services in SSA has almost doubled (to 19 percent) relative to the global average between 1991 and 2004. The number of Internet users has also grown briskly, to 14 percent of the global average. Source: Author's calculations, based on data from the International Telecommunications Union. While GDP per capita appears to be a primary determinant of access to information and communication technologies, domestic policies and regulation also have an effect. For example, the quality of the regulatory environment matters, and greater competition among service providers lowers prices and translates into higher coverage of cell phone services. A more competitive market for cell phones is also associated with lower prices for mainline services. 
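The two IT-contribution figures cited above can be reconciled with a quick compounding check. This is an illustrative back-of-the-envelope calculation, not the method of the underlying estimates, and it assumes the period 1991-2005 is taken as fifteen annual increments:

```python
# Compound a 0.2-percentage-point annual growth contribution over
# 1991-2005 (assumed here to be 15 annual increments) to recover the
# "up to an additional 3 percent of GDP cumulatively" figure.
annual_boost = 0.002          # 0.2 percentage points per year
years = 15                    # 1991 through 2005
cumulative = (1 + annual_boost) ** years - 1
print(f"{cumulative:.1%}")    # -> 3.0%
```

Because the annual boost is small, simple addition (15 × 0.2 = 3.0 percentage points) gives nearly the same answer; compounding adds only a few hundredths of a point.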
Note: This box was prepared by Markus Haacker; see also Haacker (2007a and 2007b). The scaling up of aid promised at the Gleneagles Economic Summit is yet to materialize. Official grants to SSA (excluding Nigeria and South Africa) were broadly unchanged in 2006 at 2.7 percent of GDP—about the average for the past decade. Rising grants due to the MDRI were offset by lower disbursements to Ethiopia, Rwanda, Niger, Burundi, and some of the mature stabilizers, notably Uganda and Tanzania.8 While total bilateral official development assistance (ODA) to SSA rose by more than 30 percent in real terms in 2005, the increase was almost entirely due to the Paris Club agreement with Nigeria and a moderate increase in emergency assistance. Without the debt relief to Nigeria, ODA flows to SSA have been basically flat since 2003 (Figure 2.8). In contrast, emerging creditors such as China have increased their financial assistance to SSA, be it in the form of loans, grants, debt relief, or direct investment. The additional resources are welcome as part of scaled-up assistance from the international community. It will be important that the terms and volume of this assistance be compatible with preserving debt sustainability over the longer term, be provided in a transparent manner, and be aligned with national priorities of the receiving countries as formulated in their Poverty Reduction Strategy Papers (PRSPs). Source: OECD, DAC-ODA. Data as of January 15, 2007. Rising oil revenue and capital inflows drove appreciation of the real exchange rates in many countries. Oil revenues pushed the real exchange rate in oil-exporting countries up by over 3 percent by the end of 2006 (Figure 2.9). In oil-importing countries, two developments stand out: the depreciation in South Africa by 14½ percent and, to a smaller degree, in Namibia; and the real appreciation of 250 percent in Zimbabwe. 
Excluding both South Africa and Zimbabwe, real exchange rates for oil importers in the aggregate were stable. In Ethiopia, Kenya, and Madagascar, among others, real exchange rates appreciated by more than 5 percent. Except in Kenya this was a result of factors like scaled-up expenditures, aid, and high nonfuel commodity exports; in Kenya appreciation was driven by increased tourism receipts, remittances, and capital inflows. 1 Excluding South Africa and Zimbabwe. Investor interest has been boosted by the region’s growth performance, the commodity boom, and comprehensive debt relief. Net foreign direct investment (FDI) (excluding South Africa) almost doubled since 2002, reaching $18 billion in 2006.10 Resource intensive countries, mainly oil-producers, attracted almost four-fifths of FDI, but non-resource-intensive countries (excluding South Africa) also recorded rising inflows. Investor interest is also evident from the activity of hedge funds and institutional investors in local currency debt markets. Nigeria received inflows of about $1 billion in the first half of 2006, and there have been significant inflows into Ghana, Kenya, Uganda, and Zambia (Figure 2.10). Investors have been attracted by high yields relative to the perceived risk, better macroeconomic fundamentals, and diversification benefits.11 While the improved investor sentiment is supported by rising ratings for sovereign SSA debt (Box 2.3), most SSA countries still need substantial structural reforms, including strengthening their institutional framework, to develop functioning debt markets and to improve their capacity to manage domestic debt (Chapter V). Source: EMTA; and IMF, Financial Market Update. The number of SSA countries rated by international rating agencies has grown in recent years. 
Financed by two donor-led initiatives, in 2006 Standard and Poor’s (S&P) rated 14 countries, including South Africa (first figure), while Fitch rated 12 countries by end-September 2006 (second figure).1 The sovereign ratings are based on such considerations as external and domestic indebtedness; sustainability of macroeconomic policies; the degree of development; financial sector and political stability; transparency in government operations; and the quality of domestic institutions. The median rating of countries in SSA, excluding South Africa, is B. This is far below investment grade (see the figures). Fitch also rates two monetary zones (WAEMU and CEMAC), which peg their currency to the euro, at BBB-, i.e., at the investment grade level, mainly because of the support from France embodied in the zone arrangements. 1 The number of countries rated is indicated in parentheses below each year. Source: Standard and Poor's, 'Report Card: African Sovereigns', April 25, 2006. Rating agencies view HIPC and MDRI debt relief as voluntary debt restructuring, not selective default, because it is not initiated by debtors.2 These initiatives have lowered external debt in the relevant countries to levels found in peer B-rated or even BB-rated countries. However, the debt relief itself has not resulted in immediate upgrades in ratings, although it has improved the outlook for some countries. For these, it does present an opportunity to access international capital markets (Ghana, Senegal, and to a lesser extent Benin, Burkina Faso, Mozambique, and Mali) so long as the borrowed funds are growth-enhancing and debt sustainability is ensured.3 The other countries must continue to rely on concessional financing even after the MDRI. But even for the good performers, rating agencies warn that borrowing strategies must be cautious and supported by continued sound macroeconomic policies so that the benefits of hard-won debt sustainability are not squandered. 
Note: This box was prepared by Piroska M. Nagy. 1 First ratings of SSA sovereigns were sponsored by the United Nations Development Program (UNDP) and the U.S. Agency for International Development (USAID). 2 See Fitch Ratings, Republic of Ghana (April 25, 2006), and Sub-Saharan Africa – 2006 Outlook (May 2006). 3 See Standard & Poor’s, Filling the Funding Gap for African Sovereigns After Debt Relief (April 2006), and Report Card: African Sovereigns (April 2006). Food security has improved as a result of another good harvest in 2006. It is estimated that cereal production in Africa increased in the 2006 agricultural season, with bumper harvests in several West and Southern African countries. But severe floods and outbreaks of disease are threatening food security in East Africa, in particular in parts of Ethiopia, Kenya, and Uganda. Conflict and refugee movements are jeopardizing food security in Chad and the Central African Republic. In Zimbabwe high inflation, foreign exchange shortages, and poor agricultural policies—in particular insecurity in land tenure and distorted pricing—are undermining food security, especially in rural areas. Overall, some 18 million people in SSA are considered to be at risk of starvation. There is also growing recognition that climate change due to the emission of greenhouse gases could precipitate more floods and droughts in SSA (Box 2.4). Scientists agree that global warming has already begun and will continue for centuries (Intergovernmental Panel on Climate Change, 2001). There is strong evidence that the earth’s temperature has been rising in recent decades as concentrations of greenhouse gases (GGs) have increased in the atmosphere. This is likely to have implications for global climate patterns, with potentially severe negative consequences for human life and economic activity (Stern, 2006). Addressing the challenges posed by climate change will necessitate a concerted and costly effort (Stern, 2006).
The required response comprises mitigation (reduction in GG emissions) and adaptation (dealing with the consequences of climate change). Forceful mitigation by high-income and emerging economies—the principal emitters of GGs—is essential. Adaptation, on the other hand, is the chief concern for sub-Saharan Africa (SSA), the region that contributes the least to GG emissions but that is uniquely vulnerable because it is already hot and under substantial environmental stress (Rice, 2006; United Nations, 2006b). There is a case for additional aid flows to the region to compensate it for the deleterious effects of climate change for which it has not been responsible. The impact of climate change on SSA could be dramatic (Nkomo, Nyong, and Kulindwa, 2006; United Nations, 2006a). Declining and more variable rainfall could jeopardize already-scarce water resources, so that by 2025 the number of people short of water on the continent of Africa could increase by 60 percent, to 480 million. Rising temperatures and more frequent floods are likely to increase the incidence of diseases like malaria. Agricultural production in rain-fed areas is likely to be affected; certain activities, such as coffee growing in Uganda or nomadic livestock husbandry in Kenya (Beaumont, 2006), might be completely wiped out. Rising sea levels could threaten both important agricultural areas and coastal communities, including major commercial centers like Cape Town, Dar es Salaam, and Lagos. Subsistence farmers and other poor people are likely to bear the brunt of the adverse impact of climate change. Worst of all, perhaps, competition for scarce resources could exacerbate conflict in the region. Climate change could present governments in the region with macroeconomic challenges. Inflationary pressures could surface as the supply of domestically produced food falls and budgets must devote increasing resources to preparing for climate contingencies.
If governments import more—because, for example, persistent droughts mean more foodstuffs must be imported—the balance of payments could deteriorate. If, in contrast, they spend more on domestically produced goods and services—because, for example, rising sea levels require labor-intensive infrastructure upgrades—the real exchange rate would appreciate, undermining competitiveness. While separate estimates for SSA are not available, for the developing countries as a whole the annual costs of adapting to climate change could amount to tens of billions of dollars (Stern, 2006). Mitigation by high-income and emerging economies could partially alleviate these challenges, by both reducing GG emissions and supporting the transfer of resources to SSA. The latter could be via trade in emissions caps and/or investment in SSA’s emission-reducing projects under the Clean Development Mechanism. Governments in the region will also need to allocate resources to critical projects: improving water supplies, building up coastal defenses, upgrading roads and bridges to withstand more extreme weather events, and investing in health and education. Note: This box was prepared by Dmitry Gershenson. HIV prevalence rates seem to be plateauing at a high level in most SSA countries.12 The number of people infected with HIV/AIDS in SSA rose to 24.7 million in 2006, almost two-thirds of the global total and up 2.3 percent from 2004. However, because the population also grew, the prevalence rate for adults aged 15 to 49 edged down from 6.0 percent in 2004 to 5.9 percent in 2006. Though antiretroviral treatment reached over 1 million people in mid-2006, a tenfold increase since the end of 2003, only one-fourth of those in need of the therapy actually receive it.
HIV-related mortality in the region is still rising, and the number of orphans is growing rapidly, reaching 12 million in 2005.13 The HIV pandemic continues to impose a heavy social and economic burden on the region, undermining efforts to reduce poverty and make progress toward the MDGs. Economic growth in SSA oil-exporting countries slowed in 2006 (Figure 2.11). Overall growth dropped to 5½ percent in 2006, though non-oil GDP growth rose strongly. In Nigeria unrest in the Niger delta hindered oil production and caused GDP growth to slow, despite strong growth in the non-oil sector. In Angola a delay in the coming on-stream of new oil fields lowered growth to 15 percent. The maturing of its largest oil fields caused a steep decline in Equatorial Guinea’s growth rate; and in Chad GDP growth dropped because technical difficulties slowed oil production, and the completion of the Cameroon-Chad oil pipeline reduced non-oil sector growth. In Côte d’Ivoire, growth again stagnated. Cameroon was the only OPC where growth accelerated. Growth was supported by investment and private consumption. While the former picked up somewhat, private consumption lost steam. Similarly, government consumption, which had been slightly expansionary in 2005, slipped to neutral. The contribution of net exports to real growth was again negative. Non-oil growth in oil exporters picked up markedly, to 10 percent, higher than their overall growth rate. In Angola, Equatorial Guinea, Nigeria, Chad, and Gabon, strong non-oil activity partially offset the slowdown in the oil sector, indicating that these countries are making progress in diversifying their economies. But in Cameroon, the Republic of Congo, and Côte d’Ivoire, non-oil growth was below that in the oil sector. Inflation in oil-exporting countries as a group dropped to 7¾ percent, reaching single digits for the first time since 1990. 
This reflects mainly strong stabilization gains in Nigeria and Angola, which have both sought to sterilize surging oil revenues. As noted earlier, Nigeria also benefited from falling food prices as a result of a good harvest. In the other oil-exporting countries, inflation continued at the benign levels recorded in recent years. However, strong fiscal demand and delayed pass-through of higher oil prices, along with high meat prices in Chad and Gabon, have pushed inflation in CEMAC OPCs well above that in WAEMU countries. Once again the fiscal position of OPCs improved. In the aggregate they posted an overall fiscal surplus (excluding grants) of 8 percent of GDP, with Equatorial Guinea and Gabon exceeding 10 percent, and the Republic of Congo exceeding 20 percent of GDP. Bringing the average down were Chad, which recorded a deficit of 2 percent of GDP because of exceptional security expenditures, and Côte d’Ivoire, whose fiscal deficit remained at 3 percent of GDP. While improvements in the overall balance are impressive, this measure masks vulnerabilities of the budget to fluctuations in oil prices that are better captured by the ratio of the non-oil deficit to non-oil GDP. Excluding Equatorial Guinea, which is an outlier, this ratio on average improved slightly, to 27¾ percent, although with large variations between countries (Appendix I, Table A2).14 In half of the SSA oil exporters the ratio exceeded 40 percent. In general, the improvement in the fiscal position is mainly a result of increased oil revenue coupled with a comparatively moderate increase in expenditures—a confluence that has prevailed over the past four years (Figure 2.12). On average over 2002–06, a 1 percent increase in fiscal oil revenue was associated with a 0.3 percent increase in fiscal spending. However, this ratio increased in 2006 (see Chapter III). Saved oil revenue bolstered the external position of SSA oil exporters.
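The two fiscal indicators used in this passage, the non-oil deficit relative to non-oil GDP and the spending response to oil revenue, can be sketched as follows; all numbers are invented for illustration, not taken from the report:

```python
# Illustrative fiscal indicators for an oil exporter (hypothetical data).

def non_oil_deficit_ratio(non_oil_revenue, spending, non_oil_gdp):
    """Non-oil deficit (spending minus non-oil revenue) as a percent of non-oil GDP.
    Stripping out volatile oil receipts exposes the budget's underlying stance."""
    return (spending - non_oil_revenue) * 100 / non_oil_gdp

def spending_response(pct_change_oil_revenue, pct_change_spending):
    """Percent increase in fiscal spending per 1 percent increase in oil revenue."""
    return pct_change_spending / pct_change_oil_revenue

# E.g. spending of 40 against non-oil revenue of 12 and non-oil GDP of 100:
print(non_oil_deficit_ratio(12.0, 40.0, 100.0))  # 28.0 (percent of non-oil GDP)

# E.g. oil revenue up 30 percent while spending rises 9 percent:
print(spending_response(30.0, 9.0))  # 0.3
```

A budget can post a large overall surplus and still run a sizable non-oil deficit, which is why the text treats the latter as the better gauge of vulnerability to oil price swings.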
The surplus in their external current account (excluding grants) increased further, to 8¾ percent of GDP. On average for 2002–06, a 1 percent increase in oil export revenue was associated with a 0.8 percent increase in imports (Figure 2.13), mainly because of continuing investment in oil exploration, production, and infrastructure. Since 2002, OPCs have accumulated additional foreign exchange reserves of almost $45 billion, fed by a cumulative current account surplus of $35 billion and healthy inflows of FDI (Figure 2.14). The macroeconomic policies of oil exporters continued to be broadly sound in 2006. While scaling up their spending on social and infrastructure needs, they have recognized that their absorptive and implementation capacity is limited and have saved a large portion of the windfall from higher oil revenue. The resulting improvement in their fiscal and external position has made them more resilient to sudden declines in oil prices. Monetary policy was tightened somewhat, limiting credit to the private sector to accommodate an expansionary fiscal policy; it will need to remain vigilant against second-round effects of high oil prices. In the fiscal area, fuel price subsidies should be further reduced, and some countries will need to address emerging weaknesses in their fiscal controls—such as off-budget spending and nonconcessional borrowing—and strengthen efforts to ensure the quality of spending. While pressing development needs have expanded the role of the government in the economy, these countries must improve the business environment to facilitate private sector growth. OPCs also need to implement structural reforms to create jobs, diversify their economies, and expand their absorptive capacity. In comparison with oil producers in other regions, those in SSA suffer from a relatively poor business environment.
Except for Côte d’Ivoire and Nigeria, their financial sectors are far less developed than those of OPCs elsewhere (see Figure 3.6 in Chapter III) and even of most other SSA countries. Except for Nigeria, OPCs in SSA also have a much more rigid labor market than other oil producers (Figure 2.15). Source: World Bank, Doing Business, 2006. 1 GCC = Gulf Cooperation Council members. Growth in oil-importing SSA countries proved resilient in 2006. Aggregate growth in this group was 5¼ percent, almost unchanged from 2005. It was supported by high nonfuel commodity prices, a good agricultural season, and rising investment. Consumption was again the biggest contributor to growth, although less so than previously (Figure 2.16). Meanwhile, investment picked up markedly. The expansion of government consumption and investment reflects the stepped-up efforts of many countries in the region to attain the MDGs; related to this, real growth of imports again outstripped growth of exports. Half the countries in this group recorded GDP growth rates of 5 percent or more, including Ethiopia, Liberia, Malawi, Mozambique, São Tomé and Príncipe, and Sierra Leone, where growth was buoyant. In South Africa, by far the largest economy in SSA, growth was roughly unchanged at 5 percent. Zimbabwe was the only country in SSA where real GDP declined. Inflation in general was kept under control. Despite pressures from high oil prices, inflation was broadly unchanged at 7 percent (excluding Zimbabwe) as a result of prudent macroeconomic policies and good 2005/06 harvests in many countries. However, price pressures increased in over one-third of oil importers. Strong domestic and foreign demand, rising oil prices, and the depreciation of the rand pushed up inflation in South Africa, where monetary policy responded quickly to contain price pressures. In Ethiopia, Guinea, and São Tomé and Príncipe inflationary pressures resulted from fuel price increases and an expansionary monetary policy.
Inflation in Zimbabwe again accelerated, to above 1,000 percent by the end of 2006. Swift adjustment of domestic fuel prices helped safeguard the fiscal position of oil importers. Increased efforts to mobilize revenue also supported higher spending on critical programs. The decline of the U.S. dollar and slight real appreciation, especially in the CFA zone, dampened the impact of high oil prices and reduced the cost of imports. Public spending on poverty reduction expanded the fiscal deficit excluding grants. In contrast, the fiscal balance including grants recorded a surplus of 1 percent of GDP, mainly because of MDRI relief. Fiscal spending by oil importers expanded by 0.5 percent of GDP as many countries stepped up their spending on poverty reduction. The overall revenue ratio increased by 0.5 percentage point. Landlocked and coastal countries made serious efforts to increase revenue, which rose by 0.6 percent of GDP, whereas in non-oil resource-intensive countries the revenue ratio fell by 1 percentage point. The external position improved slightly despite higher fuel prices. While imports stagnated in SSA oil importers (excluding South Africa), an increase in their exports by ¾ of 1 percentage point, to 31 percent of GDP, helped pay the higher oil bill. The current account deficit (excluding South Africa) narrowed slightly, and foreign exchange reserves edged up to 4.5 months of imports, compared with 4.3 months in 2005 and 5.6 months in 2003. In South Africa strong domestic demand led import growth to exceed the brisk growth in exports, widening the current account deficit from 4 percent of GDP in 2005 to 6½ percent. Nonetheless, reserves increased slightly. SSA economic growth is forecast to accelerate to 6¾ percent in 2007 (Figure 2.17). A renewed rise in oil production is expected to boost growth for OPCs to more than 10 percent; in Angola growth is expected to exceed 30 percent.
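Reserve coverage figures like the 4.5 months cited above divide the reserve stock by average monthly imports; a minimal sketch of the computation with invented numbers:

```python
def months_of_imports(reserves, annual_imports):
    """Foreign exchange reserve coverage in months of imports of goods and services.
    Both arguments are in the same currency unit (hypothetical values below)."""
    return reserves / (annual_imports / 12.0)

# Illustrative: reserves of $3.0 billion against an annual import bill of $8.0 billion
print(round(months_of_imports(3.0, 8.0), 1))  # 4.5
```

A commonly cited rule of thumb for low-income countries is coverage of at least three months of imports, which is why the chapter tracks this ratio alongside the current account balance.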
However, with oil prices falling, fiscal and external surpluses are likely to be smaller. In oil importers, robust demand for nonfuel commodity exports and a positive outlook for the agricultural season are expected to keep growth steady at about 5 percent. Growth is expected to broaden further in this group, with real GDP expected to expand by 5 percent or more in almost two-thirds of those countries. Inflation is expected to remain stable at 7 percent for the region as a whole (excluding Zimbabwe). In OPCs, further stabilization is again expected in Angola, Cameroon, and Chad, while inflation in most other oil exporters is forecast to hover around the benign levels prevailing in their currency unions. In oil importers (excluding Zimbabwe), lower fuel prices, an improved food situation, and vigilant macroeconomic policies should help contain inflation to about 7 percent. SSA countries in general have made impressive progress in recent years in bringing inflation down to single digits, with a vast majority (35 of 44 countries) expected to be in that group in 2007 (Figure 2.18). Unfortunately, in Zimbabwe inflation is projected to accelerate toward 3,000 percent. Fiscal and external balances (excluding grants) are likely to come under pressure, with lower prices for oil and other commodities. The overall fiscal balance for SSA (excluding grants) is projected to drop into negative territory again, with a small deficit of about ¼ percent of GDP, driven by declining balances in oil exporters and further efforts to reach the MDGs. The terms of trade for SSA as a whole are expected to worsen by 5 percent, with OPCs facing a drop of 10 percent and a reduction in their current account surplus. In contrast, the current account deficit of oil importers is expected to be stable and their terms of trade to deteriorate by only 1½ percent.
SSA’s reserve position (excluding South Africa) should improve further, to almost 7 months of imports, reflecting the rising reserves of oil exporters and stable reserves of oil importers. Preserving recent stabilization gains and broadening growth will require vigilance and continued reform. Lower fuel prices are giving some respite and should help to further consolidate macroeconomic stabilization gains. On the other hand, expectations of the people are very high in many African countries because of debt relief and the promised scaling-up of aid flows. Many governments in the region face pressures for increased outlays on infrastructure, in the social sectors, and for rural development. The development needs are genuine, but policymakers will have to manage the expectations well if they are to preserve macroeconomic stability. To effectively absorb higher aid, fiscal and monetary policies need to be well coordinated. Domestic absorption can be raised by liberalizing trade. Reforms to increase the domestic supply response to higher aid flows and enhance productivity should be pursued. Here the role of the private sector is critical, but the stagnation of credit to private nonbank businesses indicates that more needs to be done if the private sector is to be the engine of growth (Box 2.5). Structural reforms to enhance the business climate and investment in key infrastructure would further increase the region’s growth potential. While still at the bottom in the World Bank’s Doing Business survey for 2007, SSA for the first time was among the top three reforming regions. Ghana and Tanzania were among the top 10 reforming countries, and Nigeria and Rwanda were among a select group that were implementing three or more reforms. The challenge is for more countries to adopt reforms to reduce the cost of doing business. A dynamic private sector is essential for raising SSA’s growth rates, reducing poverty, and integrating Africa into the global economy. 
Currently entrepreneurs in SSA face more regulatory obstacles than in any other region of the world. The World Bank’s Doing Business 2007 report ranked 175 countries on ease of doing business; the average SSA country rank was 131. Obstacles span the range of private sector activity, from licensing through employment and credit to administrative transactions. For instance, it takes about 11 procedures and 2 months to start a business in SSA compared with 8 procedures and 1 month in South Asia, and it costs three times as much in terms of income per capita. Labor market regulations in SSA are among the most rigid in the world; they undermine private sector development, weaken external competitiveness, and discourage foreign investors. Recently, reform of the business environment has picked up. After lagging behind for years, two-thirds of SSA countries implemented at least one positive reform in 2005/06. Only Eastern Europe, Central Asia, and OECD high-income countries did better. While these reforms were relatively easy (“stroke of a pen”), a broader agenda remains pressing, such as reforming the financial sector and streamlining and enhancing the transparency of the legal and administrative system. In its core area of responsibility, the IMF helps members achieve and maintain macroeconomic stability, which is a precondition for private sector development. In doing so, policies have to be targeted to bring about sufficient public expenditures for human and physical infrastructure, adequate credit to the private sector, and tax systems free of distortions. The IMF continues to emphasize the importance of good governance, of giving central banks more autonomy, and of increasing their reliance on indirect rather than direct monetary policy instruments. The development of functioning financial markets would be a next step that could help reduce outflows of portfolio capital from SSA and attract inflows.
Other measures to broaden access to credit are to facilitate the recovery of collateral, expand microfinance and rural credit, and provide working capital and long-term financing for smaller firms. Reducing internal barriers would increase intraregional trade and make SSA more attractive for foreign and domestic investors. Expanding and improving infrastructure would help reduce the high shipping costs that impede trade within Africa and limit the scope for economies of scale. Reducing nontariff barriers, such as quantitative restrictions, import bans, roadblocks, and high administrative charges, would also foster trade. Complicated and restrictive rules of origin under various regional trade arrangements should be simplified, and the efficiency and governance of customs administrations need to be strengthened to promote intra- and extraregional trade. Finally, countries should seek to collaborate with private investors. The IMF is supporting national investors’ councils that bring together African leaders and local and foreign business executives to identify investment opportunities, obstacles to private investment, and options for removing those obstacles. Note: This box was prepared by Ulrich Jacoby. The outlook for SSA in 2007 is positive, and the risks seem moderate and manageable. Moreover, the policies of most countries support recent stabilization gains. The primary downside risks are the pace of slowdown in the global economy and how it will affect oil and other commodity prices, interest rates, and private investor sentiment. Economic growth in SSA in 2007 could be negatively affected if an abrupt unanticipated slowdown in the United States spilled over into the global economy. Short of that, correlations between the U.S.
economy and SSA have historically been weak, although trade linkages have expanded in recent years.15 There is evidence of a stronger correlation between SSA and Europe (Figures 2.19 and 2.20).16 Thus, any economic slowdown in the euro area could have a more significant impact on growth in SSA. 1 Periods of U.S. recessions shaded (National Bureau of Economic Research). While the co-movement of growth in SSA and in Asia has traditionally been weak, this is changing as the two regions become more integrated. For example, Asia now receives about 25 percent of SSA’s exports, twice as much as a decade ago. China and India together account for about 10 percent of both SSA’s exports and imports. They are also making substantial investments in SSA. If the slowdown in the world economy is worse than expected, it could hurt commodity prices. Real commodity prices, both oil and non-oil, have risen steadily since the early 2000s—to SSA’s benefit. While in 2006 oil-exporting countries in SSA saw another dramatic improvement in their terms of trade, those of oil importers also improved, by 6 percent, because nonfuel commodity export prices rose significantly. A sudden decline in commodity prices because of a sharp downturn in the global economy is a risk to commodity exporters in the region. Higher-than-envisioned oil and commodity prices would hurt net commodity importers like Malawi, Madagascar, Senegal, and Uganda, which would suffer significant GDP losses.17 Accumulating reserves, adopting flexible exchange rate policies, and fully passing through oil price increases are policy responses that will help protect these economies against unanticipated price increases. A reversal of portfolio flows would pose major macroeconomic challenges for a few SSA countries. In the search for high yields, some SSA countries are attracting increased inflows of portfolio capital. A sudden shift in investor sentiment could cause these flows to reverse.
SSA countries should continue to strengthen public debt management and enhance supervision of the banking system to track capital flows and the repayment schedules on government securities held by nonresidents. Monetary and exchange rate policies will need to be sufficiently flexible to respond to volatile movements in capital. A number of security and political risks currently face the region. Chief among these are the continuing crisis in the Darfur region of Sudan, the current conflict engulfing Ethiopia and Somalia, the political problems affecting Côte d’Ivoire and Guinea, and fragilities remaining after the recent elections in the Democratic Republic of Congo. The recurring disruptions to oil production in the Niger delta pose an economic risk to Nigeria, which also faces the risk of an election-year relaxation in policies. Management of the security risks will in most cases require action by the African Union and the international community. Note: This chapter was prepared by Sanjeev Gupta, Ulrich Jacoby, and Calvin McDonald. After being negative in the first half of the 1990s, productivity growth in Africa turned positive after 1996 and has been steadily accelerating since 2000. While still below gains in most other developing regions, productivity growth reached almost 3 percent in 2006 (IMF, 2007, Figure 1.14). An earlier study found that total factor productivity (TFP) growth improved strongly in the second half of the 1990s for the first time since the 1960s, and that growth accelerations were accompanied by strong productivity growth. The improvements were closely associated with sound macroeconomic policies, growth in trade, and institutional improvements. See IMF (2005, Chapter IV). In addition to countries being classified as oil importers or exporters, they were also classified as resource-intensive, with subgroups oil and non-oil; and non-resource-intensive, with subgroups coastal and landlocked.
These groupings follow Collier and O’Connell (2006), who show that the effect of resource endowments is independent of location and thus classify all SSA economies by endowment and location. A country is classified as resource-intensive if primary commodity rents, that is, revenue minus extraction costs, exceed 10 percent of GDP (on this criterion South Africa is not resource-intensive). In terms of location, countries are classified by whether they have ocean access (coastal) or are landlocked. A country is classified as landlocked if its access to the sea is limited and is likely to be a significant impediment to trade; hence, the Democratic Republic of the Congo is classified as landlocked. For further details, see the section on Data and Conventions in the Statistical Appendix. Volume GDP underestimates the increase in real incomes and purchasing power that may be induced by changes in the terms of trade. The command GDP indicator adjusts for this by deflating exports with the import price deflator, a measure of how terms-of-trade shifts affect a country’s purchasing power—i.e., its ability to command goods and services. On the basis of command GDP, per capita income growth in SSA averaged about 8.5 percent annually for 2004–06, reflecting the large terms-of-trade gains of oil exporters. However, those gains are probably overstated because a portion of the revenues from oil exports accrues to foreign oil companies. For oil importers, the corresponding rates are close to annual per capita GDP growth. The Global Monitoring Report 2007 (World Bank and IMF, 2007, forthcoming) finds that although poverty in SSA was reduced by almost 5 percentage points between 1999 and 2004, to about 41 percent, the number of people living below the poverty line was unchanged at about 300 million because of population growth. The second tranche of the October 2005 Paris Club agreement with Nigeria was implemented in May 2006.
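The command-GDP adjustment described in this note can be sketched as follows; the numbers are invented purely for illustration. Conventional volume GDP deflates exports by their own price deflator, while command GDP deflates them by the import price deflator:

```python
# Command GDP sketch (hypothetical data, not the report's figures).

def real_gdp(domestic_demand, nominal_exports, real_imports, export_deflator):
    """Conventional volume GDP: exports deflated by their own price deflator."""
    return domestic_demand + nominal_exports / export_deflator - real_imports

def command_gdp(domestic_demand, nominal_exports, real_imports, import_deflator):
    """Command GDP: exports deflated by the import price deflator, so a
    terms-of-trade gain raises the goods the economy can 'command'."""
    return domestic_demand + nominal_exports / import_deflator - real_imports

# Illustrative oil exporter: export prices up 20% (deflator 1.2), import prices flat (1.0)
dd, x_nom, m_real = 90.0, 36.0, 20.0
print(round(real_gdp(dd, x_nom, m_real, 1.2), 1))     # 100.0
print(round(command_gdp(dd, x_nom, m_real, 1.0), 1))  # 106.0
```

The 6-point gap between the two measures is the terms-of-trade gain; as the note observes, part of such gains may accrue to foreign oil companies rather than to residents.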
About 60 percent of Nigeria’s debt to Paris Club creditors has been canceled as part of the agreement. Nigeria also cleared arrears and repaid early a substantial portion of outstanding debt. They are Burundi, Chad, the Democratic Republic of the Congo, The Gambia, Guinea, Guinea-Bissau, the Republic of Congo, and São Tomé and Príncipe. Another six SSA countries (Central African Republic, Comoros, Côte d’Ivoire, Eritrea, Liberia, and Togo) met the income and indebtedness criteria of the enhanced HIPC Initiative (based on end-2004 data) and may wish to be considered for debt relief. Since MDRI relief is reflected in both grants and a reduction of scheduled debt service, the grant data shown in Table SA20 of the Statistical Appendix do not fully capture it. In addition, classification in the fiscal and external accounts varies by country depending on the classification system (government finance statistics and balance of payments statistics), accrual or cash budgeting, and the arrangements between central bank accounts and the budget for transfer of the IMF’s MDRI relief. See also Gupta (2006), Chapter 2, on managing the real exchange rate. A recent study by the Multilateral Investment Guarantee Agency (MIGA) benchmarked FDI competitiveness in nine SSA economies. While FDI partners cited various advantages to operating in SSA, such as preferential trade access, climate, access to regional markets, and low labor costs, they also reported significant problems, including the reliability of supply and cost of key utilities, scarcity of skilled labor, and cumbersome business procedures (MIGA, 2006). Settlement costs for SSA securities have also been declining because increasingly they can be settled on standard international trading platforms. If not stated otherwise, all data are from UNAIDS/WHO (2006). In some countries, the improvement in the non-oil deficit was partially due to their inability to expand spending rapidly.
Factors that determine co-movements include trade linkages, sectoral linkages, and financial integration. The median real growth correlation between SSA and the United States between 1971 and 2005 is not significant (estimated at 0.08), but the correlation was stronger for some countries during the 1991–2005 period. See IMF (2006), Chapter IV.
India, as can be expected from an ancient culture, has a long astronomical tradition. in India came about as a result of interaction with Greece in the post-Alexandrian period. born in AD 476 and completed his influential work, Aryabhatiya 1, in AD 499. the mathematical equations that arose in the process. a sundial, and an armillary sphere. values are different from, and in fact less accurate than, the values of Hipparchus (d. ca. solar eclipse from two stations on the same meridian. the (divine) Rig Veda, that is as terse shlokas composed in the rigid framework of metre. arguments, let alone for observations that were in any case not considered to be important. The introduction of the Arabs to astronomy came from translation of Indian texts. astronomy, rather than dabble in planetary calculations. on the astrolabe, or yantraraja 3. small for fine measurements, etc. names like Raj Vilas, Jai Vilas, or Lakshmi Vilas to their palaces. a couple of arcminutes 3,4. even though chronologically he lived in the modern age of astronomy. It is interesting to note that telescopes were used in India in the 17th century itself. The earliest use appears to have been in 1651, barely 40 years after its use by Galileo. a historical curiosity 6. Shakerley also observed a comet in 1652. colonial compulsions to learn about India’s geography. it was not of much use to him. These telescopic swallows, however, did not make an Indian astronomical summer. when it was pressed into service as a geographical aid. amusement astronomical observations for the determination of latitudes and longitudes. to Emperors, Cornell Univ. Press, Ithaca & London, 1986). seen at the original site. Madras longitude continued to be used in the official maps for about 100 years, till 1905. the Royal Society of London). was gradually built up by purchases from England or from within the country 7. and astronomical quadrants for the observation of the 1769 transit from various places.
the longitude of Pondicherry with respect to Greenwich and Paris. in 1777 by (vii) a triple-glass Dollond refractor with a double-glass micrometer. which when the micrometer was used, the telescope was fixed’. Dollond’s achromatic telescope Rs 360. A sicca rupee was a new rupee; after two or three years of use, it was at a small discount. by the Company 7, so that individual efforts at Calcutta did not have any cumulative effect. winter months December-April could require 4-6 weeks at other times. and without safe landing for the Indiamen, which therefore were often wrecked. Company during the 18th century’. Prince of Wales Island (Penang, Malaysia) where he died on 1816 October 27. and immediately became India’s Greenwich. pagoda = 3½ rupees = 8 shillings]. it could be removed and rebuilt. An unsigned copy remained at Madras, and was prefixed to Goldingham’s 1793 observations. Phillimore’s Historical Records of Survey of India 7. of Venus expedition, it has been at Kodaikanal since 1899 and in use 19. iii. A one-ft diameter quadrant by John Bird is. In 1793 the Company purchased the following for Madras Observatory 17. iv. A circular astronomical instrument of 16 in. diameter by Troughton. of Jupiter – by Dollond. Two were retained at the Observatory, and the rest distributed. Ramsden or Stancliffe, according to his own modifications. These were not sanctioned 20. The instruments collected at the Observatory under Topping included 21. vi. Astronomical quadrant by Martin. vii. Astronomical clock by Monk. viii. Pocket chronometers in silver cases by Arnold, Nos 378, 391, 393, 397. from John Goldingham who was proceeding on a long leave. at the Great Trigonometrical Survey of India (see section 5). Observatory or leave them there when no longer required by them. from the India Office and the Royal Society. with two graduated circles and a long axis moving on a graduated arc. paid for by the Company (£500).
Its defective objective was replaced in 1852 by the maker with a new one of 6 in. to as ‘the mutiny’ or ‘a war of independence’. expert mechanic (F. Doderet) became available. successor Charles Michie Smith). The damaged instrument is now at Bangalore. proving that history is a luxury poorly-equipped observatories can ill afford. met the fate of the observatories at Lucknow and Trivandrum (see sections 6 and 7). vast territory in south India. Its control now extended from the east coast to the west. Ensign Henry Kater of the 12th Foot (later FRS). from Dr James Dinwiddie (d. 1815), a lecturer of science at Calcutta, paying him Rs 3600. Lambton found them in ‘a wretched state’ and had to put them in working order. Munro [Madras Governor] 36. It was used, after modifications, till 1846. after 6 weeks of hard labour. to fall back upon 38. In 1830 the survey got some new instruments and more importantly a repair workshop. Figure 8. The title page of Taylor’s celebrated Madras catalogue. Smith in 1851. The original is at the Royal Observatory Edinburgh (I thank Mary T. Brück for sending me the photograph). than, Airy’s Transit at Greenwich, it was used at Madras for 25 years 1862-87. discoverer of a variable star, and the first Indian Fellow of the Royal Astronomical Society. building that John Evershed in 1909 discovered the Effect named after him. manufactory and supplied instruments to the GTS. same official designation, if not the salary. Mohsin was given a monthly salary of Rs 250. taken a leading place even among European instrument makers’. made a new horizontal circle and hand-divided it himself – a singular achievement. Calcutta in 1830 had to be improved upon before they could be used. repaired 2391, and examined 2067. i. Two zenith sectors by Troughton & Simms (received 1869 and 1871). Eichens & Hardy of Paris (1872). vi. Two smaller transit instruments by T. Cooke & Sons. vii. Two 12 in. vertical circles (German form) by Repsold of Hamburg.
instruments received in the 19th century. European wife) founded an Observatory at the capital city of Lucknow. the best available instruments for the Observatory. Herbert died in 1833, and was succeeded by Lt-Col. Richard Wilcox (1802-48). 1832, Wilcox had been an astronomical assistant at the GTS, with a salary of Rs 618 pm. the cleverest young man we have’. He was in addition a distinguished oriental scholar 7. Scott Waugh who subsequently succeeded Everest. Troughton & Simms. The clocks were by Molyneux. their requiring time for the previous computations …’. the building itself was ‘unhurt’, all the instruments had perished. vi. a mean time clock by E. Wrench. vii. a 5 in. aperture, 7 ft focus English plan equatorial by Dollond. viii. There was also a smaller equatorial of 4.3 in. aperture and 5 ft focus. John Broun FRS gave up astronomy and concentrated on magnetism and meteorology. over to Madras Observatory which he did in 1849 by closing down his own observatory. Society ‘in aid of the proposed temporary maintenance of an Observatory near Poonah’. pure astronomy emerged as a poor cousin. not allowed to take up the ‘civil’ appointment at Madras Observatory. work. The following instruments were sent out from England. Captain Waterhouse’s wet plates taken at Roorkee …’. ii. A 6 in. aperture, 82 in. focus equatorial by T. Cooke & Sons, ‘of their usual pattern’. Its construction was supervised by Col. A. Strange. It was also set up at Roorkee. mounting now supports Pogson’s 8 in. telescope at Kodaikanal. all by T. Cooke & Sons. from the Secretary of State for India, solar photography started at Dehra Dun in 1878. In 1880 a bigger photoheliograph – of 6 in. aperture, 9 ft focus objective giving 12 in. continued at Dehra Dun till 1925 with some years of overlap with Kodaikanal. he had shifted in the meantime. Huggins and James E. Keeler were right. a 6 in. achromatic finder with filar micrometer and solar eye piece. have been published using this telescope. ii.
6 in. Cooke photo-visual equatorial telescope. iii. Two prisms of 6 in. aperture for use with the above. iv. 12 in. Cooke siderostat. v. 8 in. horizontal telescope. vi. Large grating spectroscope, by Hilger. vii. An ultraviolet spectrograph, by Grubb. viii. Sidereal clock, by Cooke. ix. Mean time chronometer, Frodsham No 3476. be a one-astronomer observatory, closing down with Naegamvala’s retirement. when the time came for modernization. of Madras Observatory that the question of a new observatory was taken up in earnest. to Kodaikanal; and to place the new observatory under the control of the Central Government. 1893 (see sections 10 and 11). chronograph made by Eichens & Hardy of Paris. photographic lens, and provided the telescope with a new driving clock. a dividing engine was received from the same company. the sun could be photographed not only in calcium K light but also in hydrogen alpha. In 1912 instruments were received from Poona on the closure of Takhtasinghji’s Observatory. spectrohelioscope was received as a gift from the Mount Wilson Observatory. instruments to reach Kodaikanal Observatory. 1968 of an observatory at Kavalur 48. Begumpet in Hyderabad itself with Mr A.B. Chatwood as the Director. its objective was used at Kodaikanal. observatory near the two villages of Japal and Rangapur, some 50 km from Hyderabad. iii. a set of quartz clocks by Rohde & Schwarz. Zeiss Jena set up in 1972 (its twin is at Kavalur). were the state-of-the-art instruments made by Evershed himself. International Geophysical Year was used to buy new equipment for solar studies. conducive towards radio and mm wave telescopes. The post-1960 astronomical facilities will be treated separately. I thank Professor M.G.K. Menon for encouragement, advice, and help in this project. G. Swarup, A.K. Bag & K.S. Shukla) Cambridge Univ. Press. 5. Dictionary of Scientific Biography. 6. Kochhar, R.K. (1989) Ind. J. Hist. Sci. (still in the press). 7. Phillimore, R.H.
(1945-58) Historical Records of Survey of India, 4 vols., Dehra Dun. This is the most authentic reference on survey of India. W.H. Allen & Co, London. Markham is rather sketchy and not always reliable. 9. Pearse, T.D. Asiatic Researches 1, 47. 10. Love, H.D. (1913) Vestiges of Old Madras, 4 vols., John Murray, London. 11. Prinsep, C. (1885) Madras Civil Servants 1741-1858, Trubner & Co, Ludgate Hill. that teak timber could be transported down the Godavari River at a small expense. Topping also attaches an account on the cultivation of pepper. 13. Topping’s description of the Observatory (ref. 14) says that it was established in 1787. of Madras (i.e. in 1786 November or December) and he learnt about it on his return. of the MS is without any illustrations; IOLR copy has a sketch of the Observatory. an appendix written a little later. 16. Madras Observatory’s Annual Reports. is Markham (ref. 8) who recognizes Topping as a surveyor, but not as Astronomer. pagodas a month, about twice his scientist’s salary. 18. Inventory of 1811 Oct 1. Madras MS records (Indian Institute of Astrophysics). 19. Kochhar, R.K. (1987) Antiquarian Horology 17, 181. Hon’ble Company at present in my charge, 1794 Jul 22 (RAS). (RAS MSS Madras). 22. Pogson, N.R. (1887) Madras Meridian Circle Observations 1862-4, Govt of Madras. 23. Kochhar, R.K. (1985) Bull. Astr. Soc. India 13, 287. 24. Taylor, T.G. (1832) Madras Astronomical Observations, Vol. 1, Madras Observatory. 25. M.N.R.A.S. 14 (1854), 145. 27. Annual Reports by N.R. Pogson 1861-1890. 28. Annual Reports by C. Michie Smith 1891-1899. 18-19th Centuries, IHMMR, New Delhi. 30. M.N.R.A.S. (1858) 18, 287. Augustia Malley … in 1852-69, Henry & King, London. 1920, RAS, London. 33. RAS Papers 49, R.A.S. Archives. 34. DNB wrongly says it was by Lerebours & Secretan. 35. M.N.R.A.S. (1863) 23, 128. volumes, Dehra Dun. Vol. 1 gives a historical summary. 37. Strange, A. (1867) Proc. R. Soc. 15, 385. 39.
Reports of the Committee of Solar Physics (1882, 1889), H.M. Stationery Office. 40. Kochhar, R.K. (1987, 1988) Indian Institute of Astrophysics Newsletter 2, 25; 3, 11. 42. Tupman, G.L. (1878) M.N.R.A.S. 38, 509. 43. Kochhar, R.K. (1990) Indian Institute of Astrophysics Newsletter 5, 6. 44. Govt of Madras Public Govt Order 21 Nov 1893 Nos. 940, 941. 45. Lockyer, N. (1898) Report on Indian Observatories. Dallmeyer (1859-1906) who was only 15 at the time of the 1874 transit. 48. Annual Reports of Kodaikanal Observatory 1900-1961. 49. Kochhar, R.K. & Menon, M.G.K. (1982) Bull. Astr. Soc. India 10, 275. 50. Sanwal, N.B. (1983) in Nizamiah Observatory, Platinum Jubilee Souvenir 1908-1983. Observatory’s taking over by the Government, not its founding. Also see Sanwal, N.B. 51. 25 Years of Uttar Pradesh State Observatory, Naini Tal (1979), UPSO. i. Persons of Greek extraction who had already been Persianized and were located in north-west India were absorbed by the (upper) Punjab Kshatriya clans. Khatri, Arora and Sood are products of this alliance. ii. These Greeks carried a taint because they were of mixed pedigree, ate beef and otherwise also did not submit themselves to Brahminical discipline. iii. The taint was transferred to the Punjab Kshatriya clans who accepted them in marriage. iv. Khatris in Punjab were able to enlist Brahmin support for themselves and self-consciously insisted on calling themselves Khatri. v. Their brethren who migrated to the Punjab hills were not so fortunate. Since the dominant position there was held by the Rajputs, and since Brahmin orthodoxy was strong, they were pushed down in the hierarchy and dubbed Sood. Note that both Khatri and Sood are derived from varna names. vi. For some reason, the Aroras split from the Khatris and established matrimonial alliances in lower Punjab and Sind. vii. In course of time, structure appeared within the Khatri caste, which loosely split into Char-ghar and Bunjai.
From among the latter, Sarin and Khukhrain became autonomous. Punjabi Khatris are a numerically small but otherwise successful and influential caste group. Many students of current affairs probably know that the community has contributed two prime ministers to India: Inder Kumar Gujral and Dr Manmohan Singh, who does not use his Kohli surname. Though their caste appellation is obviously derived from Kshatriya, denoting the ancient Indian warrior class, the Khatris have traditionally been engaged in professions associated elsewhere with Banias and Kayasthas. They have thus been predominantly though not exclusively traders, merchants and bankers as well as administrative and revenue officials. From their original habitat in (the undivided) Punjab, the Khatris spread eastwards as far as West Bengal and Orissa and southwards into Gujarat. One of the biggest landowners in the erstwhile Bengal presidency was the Raja of Burdwan, a Punjabi Khatri from the Kapur clan whose ancestor had come over in the mid-17th century as a petty revenue official. The Mahtabs of Orissa are also believed to be of Khatri extraction. Punjabi Khatris became conscious of their caste identity about 125 years ago. The British, with their fetish for categorization and documentation, felt that all extant Indian castes should be fitted into the Vedic framework of the four varnas. “It was decided by the Government of India in 1885 to make a comprehensive field survey for precise information about the way of life, manners and customs, rituals, marriage practices etc. of the tribes, castes, sub-castes of the country for better administration and ethnographic research.” The task was assigned to a Bengal Indian Civil Service officer, Herbert Hope Risley, who in 1891-92 published his The Tribes and Castes of Bengal, after “six years of intensive study and survey”.
Much to the chagrin of the Khatris throughout north India, Risley declared that “If then, it is at all necessary to connect the Khatris with the ancient fourfold system of castes, the only group to which we can affiliate them is the Vaisyas” (quoted in Seth 1905: iii). This was unacceptable to the Khatris, for whom the villain of the piece was “One Babu Jogendra Nath Bhattacharya, M.A., of Bengal”. Risley had based his conclusion on the study by Bhattacharya, who in turn was alleged to have deliberately degraded the Khatris “under the influence of a personal grudge against the Burdwan Raj, publicly attributed by the Honourable Raja Banbihari Kapur, Manager of the State, in his speech delivered before the Khatri Conference at Bareilly, in June 1901” (Seth 1905: i). The Khatris marshalled a whole lot of evidence in favour of their higher social status and, wishing to be suitably classified in the 1901 census, submitted a “manuscript volume of about 300 pages of foolscap, dealing with the question in detail” to the census superintendent for North West Province and Oudh (corresponding to the present Uttar Pradesh). The response of the authorities was rather unexpected. It was now proposed to classify “the Khattris, the Kurmis and the Kayasthas” all in a new group called “Castes allied to Kshatriyas who are considered to be of high social standing, though their claim is not universally admitted” (Seth 1905: viii). This “night-mare of impending social degradation” propelled the Khatris into concerted action. A three-day conference of “more than four hundred representatives of the numerous Khattri Sabhas, Committees and Associations scattered over the country” was held in Bareilly in June 1901 under the chairmanship of Raja Banbihari Kapur (referred to above). The Khatri leadership was eventually able to convince the British authorities that “the Khattris are generally believed to be the modern representatives of the Kshatriyas of Hindu tradition” (Seth 1905: xiv).
It is noteworthy that the debate centred on the position of Khatris vis-à-vis Vaishyas, Kayasthas and other castes in Bengal and (what is now) Uttar Pradesh rather than in the original Khatri habitat, Punjab. The results of the campaign were summarized in a 1905 book, “A Brief Ethnological Survey of the Khattris”, written by Moti Lal Seth, deputy inspector of schools and member of the Khattri Hitkari Association, Agra. This remains one of the primary sources of information on Khatris. A valuable additional and more general source is the three-volume Glossary of the Tribes and Castes of the Punjab and the North West Frontier Province, compiled by a British civil servant, Horace Arthur Rose, superintendent of Punjab census operations. The Glossary is based on the Punjab census reports of 1881 and 1891, prepared by Denzil Charles Jelf Ibbetson and Edward Douglas Maclagan respectively. It also “embodies some of the materials collected in the Ethnological Survey of India which was begun in 1900, under the scheme initiated by Sir Herbert Risley”. It must be stated at the outset that the cultural and geographical setting, rules of endogamy and exogamy, and hierarchical ordering described in the following are as they obtained a century ago, even though the present tense is employed. There is no implicit approval or disapproval of any practice that is reported. Needless to say, various social groups are far more flexible now than they were in the past. The changes have been particularly rapid after the partition. A caste is defined by rules of endogamy. It comprises a number of sub-castes or clans which practice exogamy. People do not marry within their clan; they marry into other clans within the caste. The social and ritual status of a caste is assigned by the priestly class. Non-acceptance by the Brahmins of uncooked food and drinking water from a caste group would place it way down on the hierarchical ladder.
(Since food is grown by castes ranked low, uncooked food can be accepted.) One of the principal arguments proffered by the Khatris in support of their claim to a high social status was that the Sarasvat Brahmins accepted cooked food from them. The varna system that prevailed in very ancient times was a simple one. The current caste system is far too complex to be related to the varna system in any straightforward manner. Brahmins and Banias are probably the only two caste groups that conform to the ancient varna categories. Perusal of a Sanskrit dictionary would reveal that many castes were formed through intermarriage between various varnas. Thus Modak is described as a “mixed tribe” that “sprung from a Kshatriya father and Sudra mother” (Apte 1970: 449). Also, people who came into India from outside at different times were obviously accommodated into the caste system. Castes have split; new castes have been created; and there are examples of vertical mobility. People have migrated within the country and carried their caste identity with them. But the status assigned to them in their new setting depended on the extant power structure and availability of slots. While we try to create the big picture, we should keep in mind that caste equations were primarily local. It is not possible to construct a socio-history of any caste group, because of the total absence of authenticated source material. There are a large number of legends. It is difficult to say when these legends were created and what factual information they contain. Many legends are a recent creation. When communities prosper and become influential, they seek to upgrade their status retrospectively. There is a widespread tendency to trace the origin of castes, sub-castes and family names to ancient texts. Nobody has ever attributed the origin of their family or clan name to a dishonourable act by their ancestors! If a group was alienated from the main body, it must necessarily have been small to begin with.
It would however grow through marriage alliances elsewhere. Since a caste is endogamous, it must attain a certain minimum size for maintaining its identity. If it becomes too big it must split. It is ironical that the quest for a higher social status within Indian society required approval from the colonial rulers. Since the Europeans were obsessed with Sanskritic India, upper-caste Indians themselves went overboard in linking themselves to ancient India, as if there were no intermediary evolutionary stages between the remote antiquity and the colonial present. The remaining part of this essay is organized as follows. We first review the structure within the Khatri caste and then examine its relationship to other castes (Arora, Bhatia and Sood) which are, or claim to be, related. Aroras are recognized as coming from the same ethnic stock as Khatris but are ranked lower, while Bhatias have always been considered to be separate. Soods, residing in the Punjab hills, have not figured in the reckoning. I shall however argue that they are probably closer to Khatri-Arora than hitherto conceded. I shall then present my own hypothesis on the origin of the Khatri caste and also suggest some specific DNA tests of the hypothesis. Aroras, like the Khatris, are urbanite and engaged in similar professions. The Aroras are far more numerous than the Khatris and spread over a much larger territory. The Khatris were confined to upper Punjab while the Aroras inhabited not only upper Punjab but also lower Punjab and Sind. In upper Punjab, the Aroras were more concentrated towards the west while the major Khatri concentration was between the rivers Ravi and Beas. Satluj was the eastern boundary for both. Interestingly, the Bania concentration lay towards the east of Satluj. The absence of Banias in Punjab proper made it possible for Khatris and Aroras to take up the Banias’ profession.
It may be noted in passing that the upper Punjab Aroras are largely Sikh while their southern counterparts are Hindu. The Khatris however are mostly Hindu. This is interesting in view of the fact that the Sikh Gurus were all Khatri (see below). The primary division among the Khatris is between Char-ghar or Char-jati (four clans) and Bavanjai or Bunjai (from bavinja, 52 in Punjabi). The sub-castes comprising the Char-ghar are Kapoor, Khanna, Malhotra or Mehra, and Seth. In Uttar Pradesh, Malhotra is known as Mehrotra, and Seth and Tandon are equivalent. The total number of Bunjai sub-castes is of course much higher than 52. The relationship between these two groups is non-symmetrical. The Char-ghars marry their daughters among themselves but condescendingly accept daughters-in-law from among the Bunjai. Since the Bunjai are a party to this custom, this means that they accept a lower position vis-à-vis the Char-ghar on the social totem pole. Normally, while arranging the marriage of a boy or a girl, the partner should not be chosen from the clan of either the father or the mother. However the Char-ghars, because of the small number of constituent clans, do not follow this dictum in its entirety. While the father’s clan is kept out in toto, only the closely related part of the mother’s clan is excluded, so that two and a half clans are available for striking a matrimonial alliance within the group. For this reason, Char-ghar are also known as Dhai-ghar (dhai means two and a half) (Ibbetson quoted in Seth 1905: 175). It would thus be erroneous to consider Dhai-ghar and Char-ghar as distinct entities, as is sometimes done. There are in addition groups known as 5-jati, 6-jati or 12-jati (sometimes the word jati is replaced by ghar). They seem to represent marriage-driven clustering among contiguously placed clans. They have no other significance. The Khatri structure as recorded by Seth (1905) and Rose (1911) is over-constructed.
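The two-and-a-half-clan arithmetic behind the name Dhai-ghar can be sketched in code. The following Python fragment is a hypothetical illustration only (the function and the way the rules are encoded are mine, not drawn from Seth or Rose); it contrasts the general exogamy rule, which bars both parents’ clans, with the Char-ghar relaxation, under which the father’s clan is barred in toto while the mother’s clan stays half open.

```python
# Illustrative sketch of the exogamy rules described above. The clan set is
# the four Char-ghar sub-castes named in the text; everything else here is
# a modelling assumption, not a documented procedure.

CHAR_GHAR = {"Kapoor", "Khanna", "Malhotra", "Seth"}

def open_clans(father_clan: str, mother_clan: str, char_ghar_rule: bool = False) -> set:
    """Clans within the Char-ghar group still open for a marriage alliance."""
    if not char_ghar_rule:
        # General rule: neither parent's clan may supply a match.
        return CHAR_GHAR - {father_clan, mother_clan}
    # Char-ghar relaxation: the father's clan is excluded entirely, but the
    # mother's clan remains available except for her close relations -- so it
    # counts as only "half" a clan, giving two and a half open clans in all.
    return CHAR_GHAR - {father_clan}

# For a Kapoor father and a Khanna mother:
assert open_clans("Kapoor", "Khanna") == {"Malhotra", "Seth"}
assert open_clans("Kapoor", "Khanna", char_ghar_rule=True) == {"Khanna", "Malhotra", "Seth"}
```

Under the general rule only two of the four clans remain; under the Char-ghar rule two full clans plus the distant part of the mother’s clan remain, which is exactly the “dhai” (two and a half) of Dhai-ghar.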
It is a matter of immense pride for the Khatri community that the Sikh Gurus were all Khatris. Guru Nanak was a Bedi; Guru Angad a Trehan; and Guru Amar Das a Bhalla. He was succeeded by his son-in-law, Guru Ram Das, a Sodhi. All the subsequent Gurus came from the same family. Bichitra Natak names Rama’s sons Lava and Kush as ancestors of the Sodhi and Bedi clans respectively (Seth 1905: 61-62). There are two offshoots of the Bunjai, namely Khukhrain (spelt variously) and Sarin. The Khukhrain are said to be descendants of Khatris who “joined the Khokhars in rebellion and whom other Khatris were afraid to marry” (Rose 1911: 513). “This group consisted of 8 sections originally”: Anand, Bhasin, Chaddha, Kohli, Sabharwal, Sahni, Sethi and Suri. To these “Chandok have been affiliated in Peshawar, and in Patiala the Kannan section is said to belong to this group” (Rose 1911 II: 509). Seth (1905: 215-216) inserts Kari (?) into the list, which is difficult to identify. Ghai are also said to be Khukhrain. According to Wikipedia and web sites maintained by the Khukhrains, they were predominantly located in the area between the rivers Jhelum and Chenab, with the town of Bhera as their main centre. Interestingly, Mohyal Brahmins, rather than the Sarasvats, officiated as their priests. While the isolation of the Khukhrain was at least in part due to geography, the separation of the Sarin came about for reasons of social orthodoxy. It is said that the “entire organization” of the Khatris “underwent a complete change” in the time of Sultan Allauddin Khalji (r. 1296-1316) on the question of widow remarriage (Seth 1905: 171). On the death of a large number of Khatri soldiers, a royal proposal was made for remarrying the young war widows. The proposal was eventually abandoned because of vehement opposition from within the community. A small band of Khatris who had supported the proposal were isolated as Sarins.
For the rest, the agitation created a social hierarchy; the stronger the opposition, the higher the status. Interestingly, Seth’s account (Seth 1905: 171-175) is couched in modern idiom. One gets the distinct impression that he is backdating his own campaign against the colonial ethnologists! Sample the following: “The subject became the common topic of the day in all Khatri households…monstrous Khattri meetings were held in all parts of the country; and party after party began to pour into the capital”. “Crowded meetings were held at Delhi to submit protests against the proposal to the Emperor; a deputation waited on the Durbar to represent the case… The excitement became a mania and the mania a frenzy”. The royal supporters “could only get a limited number of signatures to what we may call The Khattri Widow Remarriage Bill” (Seth 1905: 171-173). According to Seth, this is when hierarchical ordering within the Khatris was created. “The primary movers of the agitation were considered to be the brightest jewels of their race and given the now proud title of dhai ghars”. They were followed by the Char-ghar, 12-ghar and the Bunjai (Seth 1905: 174). There are many problems with this story. While the episode may well explain the isolation of the Sarins, it cannot explain the structure within the community in a satisfactory manner. As we have already seen, Dhai-ghar do not have an identity distinct from the Char-ghar, and 12-ghar, etc., do not have a separate identity. It is not clear why social leadership in the hands of the Char-ghar should lead to their refusing to marry their daughters into the Bunjai. Significantly, there does not appear to be any mention of the episode in the Sultanate chronicles of the time. One wonders whether it was a historical event at all. It may be instructive to narrate the story of the birth of a clan, preserved as oral history by the clan itself. A girl was married into the Nanda family.
A disaster struck her parents’ family which killed all its members except for her little brother. This orphaned boy was brought up in his married sister’s household. The boy became the progenitor of a new clan, which was named Kochhar for the following reason. The little boy was carried by his sister on her side lap (called kuchhad in Punjabi). The rescue took place on Baisakhi day, which is celebrated as the founders’ day by the Kochhars. As part of the commemoration, Dadi svaad da poorha is cooked, as a homage to a holy man who fed the brother-sister duo on their foot journey. Since the Nandas thus became the foster parents of the Kochhars, the two clans would not intermarry. Notably, the Nandas do have an assigned gotra, as can be expected from an old clan, but the Kochhars have none. Although the Kochhars do not carry any living memory of the original sub-caste of their progenitor, according to Rose (1911 II: 522) he was a Seth. If this be true, it is a remarkable piece of information. The Kochhar as also the Nanda now belong to the lower-ranking Bunjai, while the Seth are from the Char-ghar. The creation of the Kochhar clan thus belongs to an era when a Seth girl could be married to a Nanda. Beri are said to be an offshoot of Chopra (Rose 1911: 517), although details are not known. Khatris claim that they are the survivors of Parshuram’s anti-Kshatriya campaigns. Their ancestors took shelter with a Vaishya friend while their purohits, the Sarasvat Brahmins, interceded on their behalf with Parshuram, who in turn spared their life on the condition that they give up arms and take to trade (Seth 1905: 53). There is another version of the story. After exterminating the Kshatriyas, Parshuram came looking for pregnant women who had taken shelter with Sarasvat Brahmins. The hosts declared the Kshatriya women to be their own daughters and as proof thereof partook of food cooked by the Khatri women (Seth 1905: 64). This legend is hard to accept at face value.
Parshuram belonged to the Bhrigu clan and is said to have lived some 30 generations before Rama and 60 generations before Krishna. According to the Puranas, the target of his wrath was not all Kshatriyas but a specific section called the Haihaya. Accounts of Parshuram’s battles are grossly exaggerated. Surely there were Kshatriyas, including the Haihaya, in the post-Parshuram period (see Pargiter (1922) for details). As far as the Khatri community is concerned, if it had taken to trade that early, it is unimaginable that the Kshatriya label would have stuck to it. This legend runs counter to the one cited above which makes the Sodhi and the Bedi direct descendants of Rama. It is very likely that the Parshuram legend is a back formation consistent with known Khatri attributes. While the Khatris escaped Parshuram’s wrath through the intervention of their purohits, allegedly the would-be Aroras saved their skin by claiming that they were not Kshatriyas but some others (aur in Hindi). They were accordingly dubbed Aroras and made to constitute a separate endogamous group. The legend must have been influential in its time because it succeeded in putting the Aroras on the back foot. The Aroras also trace their origin to Parshuram’s time, but claim that their eponymous king Arur truthfully told Parshuram that he indeed was a Kshatriya. The sage was pleased to spare and bless him. The logic here seems to be rather convoluted. If Parshuram could spare Arur for telling the truth, why did he exterminate the others? No matter when and why the Khatri-Arora split occurred, it must have taken place in the upper Punjab where the Khatris lived. Once the Aroras were refused matrimonial alliances by the Khatris, lower Punjab and Sind were probably added to the Arora fold through marriages.
The legend, no matter how unhistorical, does convey the important information that the Aroras and Khatris are accepted as being ethnically the same people, and that they separated before structure developed within the Khatri caste. I now propose a hypothesis to explain their origin. It would seem that the insistence of the Punjabi Khatris on flaunting their Kshatriya antecedents was a defensive act, whose purpose was to divert attention from an un-Kshatriya taint they carried. This taint, I would like to suggest, was an alliance with the settlers of Greek extraction. It should be kept in mind that what have been called Indo-Greeks had already been Persianized. Contrary to general perception, north-west India’s acquaintance with Greek elements began not with the Macedonian king Alexander’s invasion (326 BC) but two centuries previously, during the Achaemenid empire of Iran, which at its peak extended from the Indus in the east to the Aegean Sea in the west. During the period 546-448 BC, the Persians made repeated efforts to annex Greece. While they were thwarted in their attempts to capture the mainland, they were able to subjugate the Greek states in Asia Minor, including Ionia (from which the Sanskrit term Yavana is believed to come). One of the consequences of the intermittent Greco-Persian wars was the establishment of Greek settlements in the eastern parts of the Achaemenid empire, that is, in and to the north of the Hindu Kush region. There were two types of settlers. For some, the Hindu Kush was a safe haven. They had earned the wrath of their compatriots by collaborating with the invaders and therefore had to be shifted out for their own safety. For others, the Hindu Kush was a Siberia. They had valiantly raised the banner of revolt against the invaders and were consequently deported. In course of time both these types of settlers married locally and partially de-Hellenized themselves. When Alexander encountered them, he judged them by the actions of their ancestors.
Thus citizens of the small hill state of Nysa (between the rivers Kabul and Indus) were treated with consideration, while the Branchidae (located probably between Balkh and Samarqand) were said to have been massacred because their ancestors had yielded up the treasure of the temple of Apollo at Didyma near Miletus to Xerxes (Narain 1957: 3). There were pockets of Greek influence in the Punjab plains as well. Greek historians mention Alexander's friendly encounter with a petty king Sophytes, who either ruled the territory between the rivers Indus and Jhelum or, what is more likely, between the Jhelum and the Chenab. Direct proof of Sophytes' Greek extraction/connection has come from the discovery of a silver drachma. A notable feature of the kingdom of Sophytes was that it attached "uncommon value" to physical beauty. While contracting marriage, the people "did not seek an alliance with high birth but made their choice by the looks, for beauty in the children was highly appreciated". The love for beauty was carried to an extreme. If "the officers entrusted with the medical inspection of the infants" noticed "any thing deformed or defective", the children were ordered to be killed (Raychaudhuri 1972: 222). Greek historians also mention a people called the Kathaians, who lived to the east of the river Ravi and gave a tough fight to Alexander's army. They too valued beauty, to the extent that the "handsomest man was chosen as king" (Raychaudhuri 1972: 222). As is well known, Alexander's invasion was followed by the establishment of an empire by Chandragupta Maurya. His grandson Ashoka (304-232 BC) in his edicts refers to the Yavana and Kamboja on his north-western frontier. (Similarly, there are numerous literary references as well.) Within 25 years of Ashoka's death, the Greeks from Bactria (Balkh) came down to the Punjab plains. Demetrios (early 2nd century BC) appears to have held Punjab, as well as the lower Indus, Malwa, Gujarat, and probably also Kashmir. 
He was the first to introduce bilingual coinage with inscriptions in Greek and Kharoshthi. After him the kingdom split into two warring parts, with the Jhelum as the dividing line. The most prominent later king was Menander (c. 150 BC), who decoupled himself from Bactria and is known to Buddhist literature as Milind. His capital has been identified with Sialkot. The Indo-Greek rule lingered on till about 50 BC, when its last king Hermaeus was dethroned by the Pahlava, who also came from the north-west. The Indo-Greeks were unable to expand into mid-India. They and their early internecine wars were duly taken note of by the Puranas: "There will be Yavanas here by reason of religious feeling or ambition or plunder; they will not be kings solemnly anointed but will follow evil customs by reason of the corruption of the age. Massacring women and children and killing one another, kings will enjoy the earth at the end of the Kali age". Similarly, the Gargi Samhita states that "there will be a cruel, dreadful war in their own kingdom, caused between themselves" (Raychaudhuri 1972: 343). The Dharma-sastras do not think much of the Greeks. The Atreya Dharma-sastra, which is quoted by the Manu-smrti, mentions the Yavanas among non-Aryan tribes (Kane 1990: 261). The Manu-smrti classifies the Yavanas as dasyus who speak a mleccha language (Kane 1990: 326) and forbids Brahmins to dwell in the kingdom of a sudra (Kane 1990: 335). The Gautama Dharma-sastra quotes the widely held view that the offspring of a Kshatriya male and a Sudra female was designated a Yavana (Kane 1990: 35). It is noteworthy that Gautama forbids beef eating while Apastamba "seems to allow it and cites the Vajasaneyka for support" (Kane 1990: 73). Significantly, the latter does not mention the Yavanas (Kane 1990: 73). It is recorded that a Damodara made the Yavanas of Mulsthana (modern Multan) give up cow slaughter (Kane 1990: 806). 
It would thus seem that the Persianized Greeks, or Yavanas, were looked down upon for their mixed pedigree, for eating beef, and more generally for not subjecting themselves to the Brahminical discipline. What happened to the Yavanas? It is noteworthy that while the name Kamboja survives as a Punjab caste group, there is no preservation of Yavana in any contemporary caste or ethnic group. I would like to suggest that the Yavanas were absorbed by the Punjabi Kshatriya clans through intermarriage. The product of this alliance was the Khatri caste. Since the Yavanas had been dubbed outsiders or half-castes by the Dharma-sastras, the Khatris deliberately shoved their Greek connection under the carpet, tenaciously stuck to the Kshatriya label, and emphasized their ancient lineage. I would like to further suggest that the Sood of the Punjab hills are the same people. It is noteworthy that the Khatris could claim and obtain high-caste status because their claim was supported by the Sarasvat Brahmins. Since the dominant slot in the hills was already occupied by the Rajputs, the Sood were pushed down the hierarchy. It is significant that both the terms Khatri and Sood are derived from the ancient Varna names Kshatriya and Shudra; they are probably two sides of the same coin. It was stated in the Khatri claims for high-caste status that their rituals are in accordance with the Manusmriti. Going strictly by the book seems to be a deliberate attempt at Sanskritization. It is noteworthy that Brahmins do not have much of a hold in Punjab, unlike in the Madhyadesh, for example. The Khatri community is clan-driven rather than gotra-driven. Some of the clans have two gotras instead of one. In some cases, more than one clan share the same gotra. In addition, there are cases where the clans do not have any gotra at all. 
While a Khatri's notions about his own handsomeness may be exaggerated, the incidence of fair complexion and sharp features among Khatris seems to be higher than the national average. This may be due to the Greek strain in them. Another contributing factor may have been the beauty-enhancing selective breeding prevalent among the subjects of Sophytes and probably also among the Kathaians, as noticed earlier. The name Sophytes seems to be cognate with Sobti, a Punjabi Khatri clan name. Iran also has a similar-sounding surname, Sabouti. To sum up our discussion so far, we have made the following points.
i. Persons of Greek extraction who had already been Persianized and were located in north-west India were absorbed by the (upper) Punjab Kshatriya clans. Khatri, Arora and Sood are products of this alliance.
ii. These Greeks carried a taint because they ate beef and otherwise also did not submit themselves to Brahminical discipline. The taint was transferred to the Kshatriya clans which accepted them in marriage.
iii. Khatris in Punjab were able to enlist Brahmin support for themselves and self-consciously insisted on calling themselves Khatri.
iv. Their brethren who migrated to the Punjab hills were not so fortunate. Since the dominant position there was held by the Rajputs, and since Brahmin orthodoxy was strong, they were pushed down in the hierarchy and dubbed Sood. Note that both Khatri and Sood are derived from varna names.
v. For some reason, the Aroras split from the Khatris and established matrimonial alliances in lower Punjab and Sind.
vi. In course of time, structure appeared within the Khatri caste, which loosely split into Char-ghar and Bunjai. From among the latter, Sarin and Khukhrain became autonomous.
The above discussion is admittedly speculative. There is no reliable source material on the subject and it is not possible to establish any chronology. 
Fortunately, recent developments in biology can be combined with social anthropology to obtain valuable clues on questions such as the history of the Khatris and their relationship with other castes. We can take blood samples from volunteers drawn from different well-defined social groups (Char-ghar, Bunjai, Sarin, Khukhrain, Aroras from upper Punjab and from lower Punjab, Soods, Bhatias, etc.) and study the results of DNA fingerprinting. What results can be expected from such a study? The separation between Khatris and Soods should be small. Since the Soods, because of geographical isolation, have been tightly endogamous, their genetic study can be expected to provide valuable information. The separation between Khatris and the Aroras from upper Punjab should be less than that between Khatris and the Aroras from lower Punjab. Given the lack of any worthwhile material on the history of Indian castes, sub-castes and clans, it is time new biology was enlisted as an aid. The old advice to a young researcher is very relevant here: try something and see what happens.
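The kind of "separation" predicted here can be made concrete. Below is a toy sketch — not a real population-genetics method such as Fst, and the marker frequencies are invented purely for illustration — that measures separation between two groups as the Euclidean distance between their allele-frequency profiles; the hypothesis predicts a smaller Khatri–Sood distance than a Khatri–outgroup distance:

```python
import math

def separation(freqs_a, freqs_b):
    """Euclidean distance between two allele-frequency profiles.

    A crude stand-in for the genetic distance measures an actual
    DNA-fingerprinting study would use.
    """
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(freqs_a, freqs_b)))

# Invented illustrative frequencies at three hypothetical markers:
khatri = [0.30, 0.55, 0.15]
sood = [0.32, 0.52, 0.16]
outgroup = [0.60, 0.20, 0.20]

# The hypothesis predicts: separation(khatri, sood) < separation(khatri, outgroup)
```

Real studies would of course use many markers and proper statistics; the point is only that the hypothesis yields a testable, quantitative prediction.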
https://rajeshkochhar.com/2009/05/
In this how to use WordPress tutorial, we'll provide an introduction to WordPress for beginners, to help you get familiar with the fundamentals of WordPress. This WordPress tutorial for beginners will cover what WordPress is, how it can be used, the process of installing WordPress, as well as the practical aspects of installing themes and plugins, and creating and managing content (including posts, pages, menus, and widgets). We've created a step-by-step post that accompanies this video, which you can follow along with on the OHKLYN website at OHKLYN o-h-k-l-y-n.com (there will be a direct link in the description below). In that post, you'll find the written instructions, as well as any links mentioned in this video. So I would recommend opening the post up in a new tab, and following along. We've broken this tutorial down into eight sections. These include: What is WordPress?; How to install WordPress; an overview of the WordPress dashboard; understanding the WordPress settings; how to create and manage users in WordPress; how to amend the appearance of a WordPress website or blog; how to add additional functionality; and how to create and manage content in WordPress. The various sections will be timestamped in the description below to make it easy to navigate through this tutorial. For a step-by-step tutorial on how to create a WordPress website or blog, check out one of our free tutorials on the OHKLYN website, or on our YouTube channel. We'll also add any related videos in the description below. You will be able to follow along with this tutorial and set up your own WordPress website or blog. We've added some discount links for hosting and themes, which you can access in the description below or on the OHKLYN website here. So, let's get to it! Firstly, what is WordPress? WordPress is what's referred to as a Content Management System, or CMS for short. 
The objective of a CMS is to take care of the technical aspects of web publishing, allowing the user to focus on creating great content. WordPress is open source software, meaning that it isn't owned by a specific individual or organisation, and is free to use, improve, or extend. You can use the WordPress software in a number of ways, however the two most common ways of creating a publicly available WordPress website are via the hosted or SaaS version (WordPress.com), or the self-hosted version (WordPress.org). In addition to the two primary versions for creating a WordPress website or blog, you can also install WordPress locally on your PC or Mac. Let's explore the three most common ways of leveraging the WordPress software. Firstly, you could install WordPress locally – To install WordPress locally on your computer, you will need to download a tool like MAMP. Once you've installed MAMP on your computer, you'll download the latest version of WordPress from WordPress.org, and install it on your localhost. We'll put together a post and video on this shortly. Then there is the hosted version of WordPress (referred to as WordPress.com) – The hosted or SaaS version of WordPress allows you to leverage the WordPress software without needing to concern yourself with domain and hosting management. Your site is hosted by WordPress.com, which will make getting started a lot easier for some. However, the trade-off is that this platform enforces a number of restrictions that impact the design, functionality, and flexibility of your site. The most popular option though, is the self-hosted version (referred to as WordPress.org) – the self-hosted version of WordPress removes all of these restrictions. However, you will need to secure your own domain and hosting, and install WordPress on your desired domain. This will be a new experience for a number of users. Fortunately, we have created a number of free WordPress tutorials that walk you through this process step by step. 
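For the local-install route, the manual download-and-unzip step can be scripted. Here is a minimal sketch, assuming Python is available; the wordpress.org archive URLs are the official release locations, but the MAMP htdocs path is just a typical default that you would adjust to your own setup:

```python
import urllib.request
import zipfile

def archive_url(version=None):
    """Official wordpress.org download URL for a given version, or the latest release."""
    if version is None:
        return "https://wordpress.org/latest.zip"
    return f"https://wordpress.org/wordpress-{version}.zip"

def install_locally(dest="/Applications/MAMP/htdocs", version=None):
    """Fetch the release archive and unpack it into the local web root."""
    archive, _ = urllib.request.urlretrieve(archive_url(version))
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)  # unpacks into dest/wordpress
```

After unpacking, you'd visit http://localhost/wordpress in your browser and complete the usual install wizard, just as described above.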
And we'll cover this off quickly for you now. To install WordPress, you will need to secure your domain and set up hosting for that domain. We'll go through the steps of how to do that now, and give you two hosting options, as well as provide discount links for each option. Your domain, or URL, is the web address for your website, and is what users will type into their browsers to access your site. For OHKLYN it's 'OHKLYN o-h-k-l-y-n.com'. Pick something that's relevant and memorable. Hosting is the process of storing the content and data for your website on a web server, and serving it to users. For this tutorial, there are two options to choose from, and we'll quickly walk you through getting started with each. The first is the cheaper shared hosting option through Bluehost, and the second is the premium option through WP Engine. We use both providers: OHKLYN is hosted on WP Engine, and our demo sites are hosted on Bluehost. There are discount links to each option below, and on the OHKLYN website. Firstly, for those who want to go with the cheaper option, let's register your free domain and set up hosting with Bluehost. For those who want to go with WP Engine, skip ahead to the next section, and follow the instructions. There's a link in the description below that gives you access to discount hosting through Bluehost, as well as a free domain name. If you're following along on the OHKLYN website, you can click on this button here to get access. Here is a list of the types of domains that are included for free, some of which include: .com, .online, .store, .net, .org, .co, and .club. Now, if you've already purchased your domain, or you want to purchase an alternative top level domain (such as .shop, or a country specific domain such as .co.uk or .com.au), you can purchase that domain through a registrar like GoDaddy, Crazy Domains, or any other domain registrar (I'll add some links below). 
If you go with that option, or as I mentioned – if you've already secured your domain name, all you'll need to do then is change what's called the domain nameservers to point at Bluehost (which will be your hosting provider). Fortunately, we've written an article, and a step-by-step guide, on how to do this (I'll add the links to these guides in the description box). For the Bluehost option, we'll take care of both registering your domain and setting up hosting, as well as installing WordPress together. So, to do this, follow the Bluehost link in the description below, or if you're on the OHKLYN website, follow this button here. Bluehost is an affiliate partner of OHKLYN, so by using those links, not only do you get access to discount hosting and a free domain, but they'll set aside a few dollars from their marketing budget to help fund future free videos like this one. So we appreciate you using the link provided. If you plan on creating an eCommerce website and want to process credit card payments on your site, you will need an SSL certificate. Alternatively, if you just want to process payments externally via PayPal, you won't need an SSL certificate. If you're going to use PayPal as your sole payment gateway, you can go with the standard shared hosting plan, and click the 'Get Started Now' button to select your hosting plan and register your free domain. If you want to accept credit card payments on your site, then under the 'Hosting' option in the menu, click on WooCommerce hosting, and then 'Get started now'. Check out our tutorial on How to Create an eCommerce Website if that's what you wanna do, as it will take you through the steps of how to do it. The link will be in the description below, on the OHKLYN website, and on our YouTube channel. Regardless of which option you went with, you'll then select the plan that's right for you. 
If you intend to have just the one domain, then the first option will be fine; alternatively, if you want to have multiple domains on the one hosting account, then you'll need to select one of the other plans. You can always amend this down the track. And the great thing is that you get a 30 day money back guarantee on either plan, so you can get started risk-free. For this example though, I'll go with the middle option. To get your free domain name, you'll enter the desired domain name for your website, blog, or online store into the 'new domain' field, select the domain extension (for example .store), and hit next. If the domain name isn't available, you'll get an error message and will need to either select an alternate domain name, try to contact the owner of the domain to purchase it from them, or select another top level domain extension. If you've already purchased your domain name, enter your domain in the 'transfer domain' field and select 'Next' (remember to review the article on how to change the DNS records to point at Bluehost). To set up your hosting account, enter the required account information here. In the package information section, choose your desired hosting term and domain add-on preferences. I recommend selecting 'domain privacy protection' so that the personal information associated with your domain isn't publicly available (this is optional of course). Once you've entered the required information, add your payment details, review the terms, and select 'Submit'. Once you've done that, you'll be taken to this page here. You will have been sent a confirmation email to the designated email address on the account. You will need to create a password for your hosting account. To do that, click on 'create your password'. This will take you to the Bluehost tab within the back-end of your WordPress site. To access your WordPress dashboard, click on 'dashboard' in the menu on the left. 
There will be a number of notifications, which you can action, or dismiss by clicking on the 'x' in the top right corner. You can amend what's visible on your dashboard by clicking on the 'screen options' dropdown in the top right, and checking or unchecking the boxes. A number of additional plugins will be installed. You can view these by hovering over 'plugins' in the admin menu on the left, and selecting 'installed plugins'. In addition to the standard WordPress plugins, Bluehost will install Jetpack, Mojo Marketplace, OptinMonster, and WPForms. You can leave these active, or choose to deactivate and delete these plugins. I'll leave this up to you. I'll delete mine, as I like to use as few plugins as possible. This can be done in bulk, by selecting the checkbox next to the plugins, choosing deactivate from the bulk actions dropdown, and then clicking apply. I'll then delete all of the selected plugins, and return back to my WordPress dashboard. If I enter in my domain name, I'll see that WordPress is now installed. Congratulations! You officially have a new website! For those who have gone through registering your domain and setting up hosting with Bluehost, you can move on to the next step, which will be an overview of the WordPress dashboard. Click on the timestamp in the description below to skip ahead. For those who want faster, more consistent hosting, and wanna go with a premium hosting solution, we'll go through the steps of setting up hosting with WP Engine. As part of the OHKLYN community, you're entitled to a discount by following the link provided, which is either in the description box below, or if you're following along on the OHKLYN website, you can click on this link here. That will take us to the WP Engine site, and we'll scroll down until we see the different plans. 
If you just want to set up a single website, the personal plan will be fine; you can always add additional domains at any stage. However, if you're going to manage multiple websites, then you may want to look at the other plans. For this example, we'll go with the personal option, which will take us through to this page here. By selecting the annual option we get two months free, and in addition to that, through being part of the OHKLYN community, you get access to a 20% off coupon on top of that. To get access to that, click the link provided, or enter OHKLYN o-h-k-l-y-n.com/go/wp-engine, which will take you through to the WP Engine site and include the discount. If for whatever reason the discount code doesn't carry across, then sign up to our newsletter and you'll be sent a welcome email with the WP Engine discount code included. To create your account, enter your email and account name, and select which data center you want to use. There are a number of options to choose from; pick the location that's closest to you, or your intended audience. Then, input your name, scroll down to the billing information, and add your billing info. Review the terms and conditions, and then click on Create My Site. Once you've done that, your WP Engine portal will be in the process of being built. You can confirm the details here: the Plan Details are on the left hand side, and your Billing Information is on the right. If we scroll down, we'll see the details of our account and username (your password will be sent to your email account), and then below that we've got the details of our URL. On the OHKLYN website, there's a link to a video that goes through how you complete your setup process, so I'd recommend clicking on that to finalize your account setup. The cool thing about WP Engine is you won't need to install WordPress – they do that for you. There are some tools to help with getting started, so if you need to migrate an existing site then there's a tool to help you with that. 
The best thing about managed hosting is that you've got full support, so if there's anything that's unique, or you're struggling with anything in particular, you can contact them directly and they'll be able to help you through the process. You will have received an email from WP Engine; follow that link through to enter your password, and that will take you to your WP Engine portal, which looks like this. Pause this video, and once you've pointed your domain's A record at WP Engine, and WordPress is installed for you, we can continue on to the next section. Once again, follow the link on the OHKLYN post here to the video on how to point your domain's A record at WP Engine, and finalize your hosting setup. I've installed WordPress in a development environment. It's a clean WordPress install so it should look the same; if yours is slightly different don't worry – the fundamentals will all be the same. I do a lot of WordPress website and blog development for clients and prefer to work in a staging or development environment before pushing a site live, however, it isn't always necessary. Ok, so the WordPress dashboard or admin panel is broken down into 3 main sections: at the top we have the WordPress toolbar, the menu or admin menu is located on the left-hand side, and the main admin area is in the middle, where we'll do most of our work. I'll give you a brief overview of each section now – if you want to take a deeper look then check out our Introduction to WordPress for Beginners guide and video. The WordPress toolbar at the top is dynamic and adjusts the available options depending on which page you're on, and whether you're viewing the page from the front end or the back end. From the left, you have the WordPress logo that acts as a dropdown to provide information about WordPress and some useful links. Next is the site name; when clicked, this will navigate you to the front-end of your website. 
If a newer version of WordPress is available or any plugins on your site need updating, a conditional button will appear here next to your site name. You then have a count of the comments held for moderation. Next is the New button; hovering over this gives you the option to create a new post, new media item, new page, or a new user. On the right hand side, you have "Howdy," and your name followed by a dummy avatar, which can be updated via Gravatar.com (Gravatar stands for globally recognized avatar) – we'll update this later when we customize your website. By hovering over your name you can access your profile information and settings, as well as the logout button for your website. If we click on the site name, that will take us to the front of your website (yes it's uninspiring at the moment, but that will soon change). You'll see that the toolbar options have changed. From the left we still have our WordPress logo as before, however if we hover over the site name, you have more navigation options. You can head back to the dashboard, manage your theme, widgets, and site menus. Next to the site name is the customize option, which takes you to your theme customizer settings. We'll go through this in more detail later. If we click into a post or page, you'll see we now have the option to edit the post or page. By clicking on edit post (or page), you will be taken directly to the backend of that post or page to make edits – this is a powerful feature and one that will save you a lot of time. When you add additional plugins, and with some WordPress themes, you will also have additional features within the toolbar. Ok so back to our main dashboard. The Admin menu located to the left of your dashboard is separated into 3 main sections, those are: the Dashboard section, the Content Management section, and the Site Administration section. The Dashboard section provides easy access to the Dashboard, updates, and additional plugin features. 
The Content Management section is where you create and manage Posts, Pages, Media items, Comments and additional plugin features. The Site Administration section is where you configure the design and appearance settings for your website (including selecting the active theme for your website, creating and managing menus, widgets, and customizing your website’s theme). It’s also where we manage plugins, users, control global WordPress settings, and activated theme and plugin extensions like SEO, Social sharing, theme specific settings, and security. We’ll go through some practical examples for each of these in the coming sections once we upload our theme and start working with content. For a more detailed overview see our Introduction to WordPress for Beginners guide or video. The menu is fully responsive, meaning that as the screen size gets smaller, the menu adjusts to remain accessible on all types of devices. Lastly, the main Admin area serves as our primary workspace, and adjusts depending on what’s selected from the admin menu. I’ll draw your attention to the screen options tab in the top right corner. When you open this tab you’ll see a list of options and features that are available for display depending on which page you’re on. Similarly, the help tab to the right, shows you helpful hints for the page that you’re on, as well as links to relevant documentation. The first thing I always do when setting up a new WordPress website or blog – is adjust the global WordPress settings – so let’s do that first. 
To do that – from your dashboard, hover over settings in the admin menu and you'll see the six default WordPress settings (with certain themes and additional plugins you will have access to additional options here), however the default global settings will be: General, Writing, Reading, Discussions, Media, and Permalinks. Let's go into General – At the top, this is where you manage your site name and tagline – you can also manage this from within your theme customizer, which we'll cover off in a bit. The WordPress address and site address are more advanced options and relate to the location of the WordPress software. Changing these can bring your site down if you don't know what you're doing, so we'll leave these as is. Below that is the admin email address, which by default is the email address you used to set up WordPress. This email will be used to notify the admin user of any changes on your site, such as automatic updates, registration of new users, etc., and can be amended here. The membership option allows anyone to sign up to your site, and while it has a very specific function, it is a dangerous option – I would encourage you to leave this unchecked. Below that you can choose the default user role, choose your site language, timezone, preferred date and time format, and which day of the week your week starts on. If you commit any changes you will need to hit 'save changes' at the bottom. The next tab is Writing – The writing settings allow you to set a default post category and post format for your blog posts. You can also enable the ability to post to your blog via email – which is a cool feature, but not something we're gonna cover in this video. On the Reading tab – you are able to amend the front page display, or set the homepage for your website. For most websites you will want to design and set a specific page as your homepage, which you would do by selecting the static option and choosing the page you want from the dropdown. 
Later we'll upload demo content to set one of the homepage layout options from the demo site. Alternatively, you can create a new page, use the builder and shortcodes to create the layout you want, and set that as your homepage. We'll set our 'Posts page' as our blog page, which we'll do later. This is the default page where our blog posts will show up. You can also manage this from within the theme customizer. Next you can amend how many posts are shown on index pages of your website (which would include the front page, any category or archive pages, etc); by default this is set to 10, but you can show as many as you like. The next option relates to syndication feeds or RSS feeds and isn't something you'll likely use. The option after that relates to what's included in a feed and isn't overly relevant. At the bottom, however, is the 'Discourage search engines from indexing this site' checkbox, which prevents your website from being indexed by search engines like Google – you'll leave this checked while you build out your website, but once you're ready to go live, you will want to uncheck this box so that your website can be found on Google, etc. Once again, save any changes. Discussions – is where you manage how and when comments are displayed on your site and how users can interact with comments. These will change depending on your preferences, however I'll walk you through my preferred options and explain the reasons why. For more info, review the documentation for each option under the help tab. Under 'Default article settings' I'll uncheck the first two options, as these are legacy options and have been somewhat exploited. I'll leave 'Allow people to post comments on new articles' checked – if you want to disable comments on future posts, you would uncheck this option. Next is 'Other comment settings'. I highly recommend leaving 'Comment author must fill out name and email' checked, as this will help avoid spam comments and general trolling. 
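Most of these discussion settings boil down to simple gatekeeping rules — for example, WordPress can hold a comment for moderation once it contains more than a chosen number of links, since links are a common sign of spam. As a toy illustration of that idea (not WordPress's actual implementation, which lives in its comment-moderation code):

```python
import re

# Count anything that looks like the start of an http(s) link.
URL_RE = re.compile(r"https?://", re.IGNORECASE)

def hold_for_moderation(comment, max_links=1):
    """Flag a comment for manual review when it exceeds the allowed link count."""
    return len(URL_RE.findall(comment)) > max_links

# hold_for_moderation("see http://a.example and https://b.example") -> True
# hold_for_moderation("no links here") -> False
```

With max_links set to 1 (the value I suggest below for the real WordPress setting), a comment needs two or more links before it is held back.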
The following option relates to the membership option we talked about earlier and whether the user needs to be registered to comment. You can prevent people from commenting on old content by closing comments on articles older than a specific number of days. You can enable nested replies to comments and set the level of depth. I typically leave this as is, but check and see what this looks like on smaller devices, as it can impact the design and user experience if there are too many levels of nested comments. If your site gets lots of comments you will want to enable the 'Break comments into pages' option so that your page doesn't become overrun with comments. Lastly, you can change the order in which comments appear (with oldest or newest at the top of the page). You then have some notification settings, which you can amend to suit you. Below that, you can choose to either approve every comment manually – which may be tedious but will ensure you only post welcomed comments – and/or the option to automatically publish comments, provided the author has a previously approved comment. I would advise against turning both of these off, as your site will likely be inundated with spam. I tend to leave the settings as they are. Next you can automatically hold comments for moderation if they have greater than x number of links in the comment. Links are indicative of spam comments and I generally set this to 1, however you can leave this as is or set it to whichever you want. At the very bottom of the page, you can amend the comment avatar settings, set the rating for the avatar, and pick the default avatar that is displayed whenever someone commenting doesn't have an avatar linked to their email. If you've made changes, select 'save changes'. Media settings are where you can amend the default image sizes that are created by WordPress anytime you upload an image. 
Generally these settings won’t need to be amended, and if changes are required, your theme documentation will advise you on the appropriate settings. By default WordPress organizes your media uploads by date; to amend this you can uncheck this box, otherwise leave it checked.

Finally, Permalinks is a setting we will adjust, as it relates to how permalinks (URLs) are created on your website. By default this is set to ‘Day and name’ to reflect a journal; however, I would recommend changing this to ‘Post name’, as it will clean up your URL string to represent the post, page, or product name. This is the most common option and is arguably preferred from an SEO perspective. Set whichever you want, but try not to change this later, as it will break your URLs. We’ll leave the category and tag base blank. Remember to save any changes.

Now that we’ve covered the global WordPress settings, let’s quickly look at how you manage users on your site. You may want to set up contributors to your website or bring on an editor or shop manager, and the Users panel is where you do that. From your dashboard, hover over Users in the admin menu. Your options are to view all users, add a new user, or view your profile. Let’s view your profile first; this is the admin user that was created when you installed WordPress. At the top you can disable the visual builder for posts and pages; when you’re new to WordPress and don’t have a working knowledge of HTML, I would keep this unchecked. Next you can amend the color palette for your backend dashboard. You can enable keyboard shortcuts (to learn more, click on the link). Below that you can disable the toolbar when viewing your site. Here you can add your first name, last name, and nickname, and then choose how you want that name to be displayed publicly as the author of published posts. Next is the email address associated with this user account.
This is also associated with your gravatar (which we will set up in a moment). You can enter a website address associated with this user account. Below that you can add an author bio that will typically show up under the post along with the user gravatar and the display name chosen above. To set your gravatar image, open the link here in a new tab; you will need a wordpress.com account for the email address used for the user account. If you have one, log in; otherwise, create a free account. You’ll be sent a confirmation email; to activate your account, follow the link in the email and then sign into Gravatar. Click on add a new image and choose the image source (I’ll choose to upload one), crop the image if needed, set the rating for your avatar, and you’re all set. Below that are some general account management settings related to your password and account access. And underneath that, you can add links to the author’s social media profiles, which will be displayed under posts with the name, bio, and avatar entered above. Lastly, there are the author signature or sign-off options. Remember to click ‘Update profile’ to save changes.

Under the All Users tab you can view and manage all users of your website, including deleting users and amending access levels and user settings by clicking into the user account. That will take you to the screen we were just on for that specific user. To create new users, click on add new. Enter the new user’s information and set a password. You can choose to send an email notifying the user of their new account. The most important option, though, is selecting the user role. Subscriber is the lowest level of access; subscribers are created when someone registers on your website. With this access level they can only see and manage their own user profile. Contributor is the next level of access. Contributors only have the ability to create, edit, and delete their own unpublished posts.
They are unable to upload or edit media, publish posts, or edit or delete their posts once they’ve been published. Author has the same access as the contributor, except that authors are able to publish, edit, and delete published or unpublished posts, as well as upload media items. Editor has a greater level of access, including all of what an author has, plus the ability to view, edit, delete, and publish other users’ posts, as well as manage categories and comments. Administrator is the highest level of access and has full control over your site. Be careful who you assign this role to, as administrators have the ability to change everything, including user accounts. When we install the WooCommerce plugin, two new user types will be created. Customer is created when someone checks out on your site; this allows them to view order details and order history, as well as edit their customer account. The other user type is Shop Manager. If you decide to hire someone to manage your online store, you can assign this access level. It will allow them to edit the WooCommerce settings and products, and view WooCommerce reports. It has a similar level of access to the Editor role. Once you’ve chosen the appropriate level of access and entered the user’s information, select ‘Add new user’.

What will have the biggest impact on the appearance of your WordPress website or blog is the theme you decide to use. A WordPress theme will take the content that you create and present it in dramatically different ways. So what is a WordPress theme? A WordPress theme is a group of files that work with the underlying WordPress software to enhance the design and functionality of a WordPress website or blog. To learn more, review our what is a WordPress theme article. To find the right WordPress theme for your project, check out the WordPress theme reviews section of the OHKLYN blog to see the best-rated themes by niche, or view our article on the best WordPress theme providers & marketplaces.
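The role hierarchy described above can be summarised as a capability map. This is a simplified sketch for illustration only; the capability names here are made up and do not match WordPress’s real capability keys:

```python
# A minimal sketch of the role hierarchy described above.
# Capability names are illustrative, not WordPress's actual keys.
ROLE_CAPS = {
    "subscriber":    {"read"},
    "contributor":   {"read", "edit_own_unpublished_posts"},
    "author":        {"read", "edit_own_unpublished_posts",
                      "publish_own_posts", "upload_media"},
    "editor":        {"read", "edit_own_unpublished_posts",
                      "publish_own_posts", "upload_media",
                      "edit_others_posts", "manage_categories",
                      "moderate_comments"},
    "administrator": {"*"},  # full control, including user accounts
}

def user_can(role, capability):
    """Return True if the given role grants the capability."""
    caps = ROLE_CAPS.get(role, set())
    return "*" in caps or capability in caps

print(user_can("author", "upload_media"))       # authors can upload media
print(user_can("contributor", "upload_media"))  # contributors cannot
```

Each role is a superset of the one below it, which is why picking the lowest role that still lets someone do their job is the safe default.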
There are both free themes and premium themes that you can use for your website. The main benefits of using a premium theme are access to support, more extensive theme documentation or instructions, extended functionality, and access to demo content (often with a one-click demo content importer), which for around $50-100 is good value. Premium support packages alone can cost $50/month, so the fact that support is included with a premium theme makes it a smart investment.

Let’s cover the steps of how to install a WordPress theme. Firstly, we’ll cover how to install a free WordPress theme. From your WordPress dashboard, navigate to ‘Appearance’ > ‘Themes’. Select ‘Add new’, and either search or browse for the theme you want. Once you’ve selected the theme you want to use, select ‘Install’, then ‘Activate’.

Now let’s take a look at the steps to upload and install a premium WordPress theme. The first step is to purchase and download your premium WordPress theme; this will be in the form of a .zip file. We recommend either Themify, Elegant Themes, or CSSIgniter (there are discount links to each provider in the description below, or on the OHKLYN post here). Sign up to our newsletter to get a 30% discount for Themify and other providers. Once you’ve downloaded your theme (a .zip file), from your WordPress dashboard navigate to ‘Appearance’ > ‘Themes’ and choose ‘Add new’. Select ‘Upload theme’, click on ‘Choose file’, and navigate to the .zip file for the theme you want to upload. Hit ‘Open’, then ‘Install Now’. Once it’s done you’ll get a confirmation message stating that the theme installed successfully. Select ‘Activate’, and you’re all set.

Let’s take a look at plugins and how you can extend the core functionality of your website. As we mentioned, a WordPress theme will have a significant impact on the look and feel of your website, as well as its core functionality.
However, you are able to add additional functionality to your WordPress website or blog by installing plugins. OK, so what is a WordPress plugin? Plugins are used to extend the core functionality of WordPress, allowing you to customize your site in a number of unique ways. The most common plugins include contact forms, chat, security, eCommerce, SEO, caching, and performance plugins.

Let’s take a look at how to install a plugin. From your WordPress dashboard, hover over Plugins in the admin menu and select ‘Add new’. Search for the plugin you want to install, or upload a plugin by selecting ‘Upload plugin’ and choosing the plugin file. Hit ‘Install now’, then ‘Activate’. With plugins, you always want to check the number of active installs, the star rating and number of reviews, when it was last updated, and whether it’s compatible with your version of WordPress.

OK, so we’ve covered the more technical aspects of setting up and configuring a WordPress website or blog; let’s take a look at how to create and manage content in WordPress. To do this we’ll explore the content management section of the admin menu, starting with posts. Posts are used to publish any ‘blog content’ and are associated with a category or grouped within a specific topic. By default, posts are displayed in reverse chronological order, with the more recent and relevant content visible immediately for users. If you hover over Posts, you’ll see the four default WordPress options: All Posts, Add New, Categories, and Tags. All Posts is where you’ll manage your posts, and we’ll take a good look in there in a minute. Add New is how you create a new post. You can also create a new post by hovering over ‘New’ at the top within the WordPress toolbar and selecting ‘Post’. Categories is where you’ll create and manage the categories for your blog. We can also create new categories from within the post editor, which I’ll show you in just a moment.
Categories can be hierarchical; for example, you might have Fashion as a category and Casual as a sub-category of Fashion. Tags give you another way of adding commonalities to posts.

Let’s click on All Posts and dig a little further into posts. At the top, you have the option to create a new post by clicking on Add New. Below that you have the count of posts by status. Initially you will only see ‘All’ and ‘Published’; as you create more content you will often have posts in other statuses like ‘Draft’, ‘Pending review’, ‘Sticky’ (which means a post that you’ve pinned to the top of your blog), or ‘Trash’. Below that you can bulk edit or trash multiple posts, and next to that is the option to filter posts by publish date or category. Below that is your posts workspace, which shows all your current posts. From left to right you will see the post title, author, category or categories, tags, comments, and published or last modified date. Plugins and certain themes will add additional columns to this section. With a clean WordPress install there will be the dummy ‘Hello World’ post. To edit a post you can either click on the post title or the edit button to jump into the edit post section. You can also select quick edit, which allows you to edit certain aspects of the post such as the title, categories, status, etc. To delete a post, select ‘Trash’. This doesn’t permanently delete a post; rather, it moves it to the ‘Trash’ status or folder. If you click on the trash link here, this will take you to all posts that have been trashed. From there, you can either permanently delete the post or restore the post, which will revert the post back to its previous status.

Let’s jump into the post and run through the available options. This is the same view you’ll see when you create a new post. On the left-hand side of the workspace is where we manage the post content, and the right-hand side is where we manage the post admin info and post meta. We’ll start with the left-hand side.
At the top of the workspace is where you enter the post title. The permalink below that is automatically created once you enter the post title; this can be amended by selecting edit. Below that is the main content panel, which looks and functions very similarly to a word processing application. Some themes are equipped with a page builder to help create more diverse layouts. When it comes to adding blog content, there are two views: the visual editor, which tries to replicate what the content will look like on the front end of your site, and the text editor, which strips all the formatting and relies on HTML to mark up or format the text. The visual editor is most common for beginners, whereas the text editor is preferred by those with a working knowledge of HTML. In the visual editor you have a row of formatting options, similar to what you’ll find in a word processing application; you can expand the options available by clicking this toggle here. You can either create and format your content directly within the post editor or copy in content from an external document like Google Docs. If you copy and paste content in, you will need to paste it in as unformatted text. To do this, either select the ‘Paste as text’ option from the formatting bar, or paste it directly into the text editor tab. If you prefer to write your posts from within WordPress, you can leverage the distraction-free writing tool by clicking this button here to help keep you focused.

Let’s quickly run through your options for formatting your text. To add headings or titles to your post, highlight the text and select which level of heading it is. Typically there should only be one Heading 1 (H1) per page, which should be reserved for your main blog title. Headings should be used to specify hierarchy, and not as formatting. You can bold and italicize text, which in HTML is referred to as strong emphasis and emphasis.
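As an aside, the permalink that WordPress generates from your post title (mentioned at the start of this section) is essentially a ‘slugified’ version of the title. Here is a minimal sketch of the idea in Python; it approximates, but is not, WordPress’s actual sanitize_title() routine:

```python
import re

def slugify(title):
    """Turn a post title into a URL slug, roughly the way the
    permalink is derived from the title (a simplified sketch,
    not WordPress's real sanitize_title() implementation)."""
    slug = title.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # runs of non-alphanumerics -> one hyphen
    return slug.strip("-")

print(slugify("Hello World!"))  # hello-world
```

This is also why the ‘Post name’ permalink setting produces clean, readable URLs: the slug is just the title with punctuation stripped and spaces turned into hyphens.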
You can add bullet or numbered points to text; in HTML this is referred to as adding an unordered or ordered list. For an unordered list the shortcut is to type a ‘-’ followed by the text. Similarly, the shortcut for an ordered list is to type ‘1.’, a space, and then the text. To add another level of list within a list, you can use the increase indent and decrease indent buttons on the bottom row. To add a blockquote, or pull a quote out of an article, click on the paragraph you want to wrap in a blockquote and select the blockquote option. You have your alignment options here, and next to that are the link options. To add a link to some text, simply select the text you want to make clickable (this is referred to as the anchor text) and select the insert/edit link button. You can then type or paste in the destination URL or link address and hit apply, or click on the settings icon for more options. If you want the link to open in a new tab, check the ‘Open link in a new tab’ box. To remove a link, click on the link and hit the remove link button. You can truncate posts by including a ‘read more’ tag at any given point within the article; this controls how much of the post is shown on index pages and adds a read more button. This can also be controlled globally by your WordPress theme or via the post excerpt within the post editor. On the bottom line, you can add a strikethrough to your text, which edits out the text but doesn’t delete it. You can add a horizontal line to a post to break up the content before the line from that which follows. You can also amend the text color (although I wouldn’t recommend doing that here for an entire post). You then have paste as text, which we’ve talked about, clear formatting, add special characters, indent content, and undo and redo your edits.

To add images to your posts, click or move your cursor to where you want to insert the image, select ‘Add media’, and upload or select the image you want to insert.
You can add a title, caption, alternative text, and a description to your image (different themes will treat this information differently), and select ‘Insert into post’. If you click on the image, you can change the alignment, or click on the ‘edit’ icon to access more options. Within the display settings, you can select the image size and amend the link attributes of the image. Under the advanced options, you can add a CSS class, have the link open in a new tab, and amend other attributes. To add an image gallery, click or move your cursor to where you want to insert the gallery, select ‘Add media’, then choose ‘Create gallery’. Upload or select the images you want to include in your gallery, and click ‘Create a new gallery’. To amend the order of the images in your gallery, simply drag and drop them into your preferred order. In the gallery settings on the right, set ‘Link to’ to the media file, and choose your desired image size. Once you’ve done that, select ‘Update gallery’. With the theme that we’ll upload next, that will mean that when you click on an image within an image gallery, it will open in a navigable lightbox.

OK, so that’s how you’ll add and format content. Below the content editor there are currently no more fields; however, under screen options at the top right, you can amend the visibility of additional fields that are used to control things such as the post excerpt, author, etc.

On the right-hand side, you have your publishing panel. Here is where you manage the status and visibility of a post. When you create a new post, the default status will be set to draft. Once you’ve added content to a new post, or made edits to an existing post, you can either save the post as a draft (if it hasn’t been published already) or preview the post to see what it would look like from the front end. The visibility options allow you to set a post to private, password protected, or public.
You can also choose to make a specific post sticky, meaning it gets pinned to the front page of your blog. When you’re happy with the post, you can select Publish to make the post visible on your site. You can also schedule a post to be published at a specific future date and time; the timezone is based on the time and date settings configured in your general settings tab. The next panel is the post format panel. This allows you to specify which type of post it is and style it differently. This isn’t supported across all themes and can be confusing, but when used effectively, it is a nice feature. The categories panel allows you to assign a post to a category or categories. You can also create new categories from within this section by clicking on ‘Add new category’, entering the new category, selecting whether or not it has a parent category, and selecting ‘Add new category’. In the tags panel below, you can add as many comma-separated tags to your post as you like by entering them and selecting ‘Add’; to remove tags, simply click on the ‘x’ next to the tag you want to remove. Lastly is the featured image. To add a featured image, select ‘Set featured image’ and either choose an image from the media library or upload a new image. When you’ve chosen the image you want, select ‘Set featured image’. To remove the featured image, click ‘Remove featured image’; to change it, click on the featured image and upload or select a new one from the media library.

OK, so that’s pretty much what you need to know about creating and managing posts; let’s take a quick look at pages. Pages in WordPress are used for more permanent and timeless content that is likely to remain relevant for a longer period of time, for example a homepage, about, or contact page. To create or manage pages, from your WordPress dashboard hover over ‘Pages’ in the admin menu. From here you can either view all pages or add a new page.
For now we’ll click on view all pages. The workspace will look similar to the posts section we just covered. Let’s take a look at the sample page that’s been created. You will use the title, permalink, and content panel on the left the same way as you will for posts, or you’ll use one of the page builders that’s compatible with your WordPress theme. On the right-hand side, the publish panel and featured image panel are also very similar. The page attributes panel, however, is unique to pages. Parent allows you to select a parent page to establish site hierarchy. Depending on which theme you have activated, you may or may not see the page templates dropdown. Page templates alter the appearance of pages, such as including or removing sidebars, etc. I would recommend going through the various page templates within your theme to understand what each template is doing. The order box allows you to set the order in which pages appear on the ‘All pages’ tab.

The media section of WordPress is where you upload all your images and other assets. It is recommended that you avoid uploading videos to your WordPress media library; instead, use a video hosting service like YouTube or Wistia, as hosting videos on your WordPress host will drain your resources. We’ve already covered how to add media items directly to your posts and pages; however, you can also upload media by hovering over ‘Media’ and selecting ‘Add new’. From there you can either select the files from your computer or simply drag and drop them into the media library. Try to keep image files as small as possible (typically no larger than 500kb), as larger files will slow down your site’s page load speed. If you want to upload a PDF document or something similar for users to download from your website, you can also add it here. Once you’ve uploaded the document, select the URL and use that as the link destination when you insert a link to text or an image from within your posts or pages.
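The 500kb guideline above is easy to check before you upload. A tiny Python helper for illustration only; the threshold is the rule of thumb from this section, not a limit WordPress enforces:

```python
MAX_BYTES = 500 * 1024  # the ~500kb guideline mentioned above (an editorial rule of thumb)

def ok_to_upload(size_bytes, max_bytes=MAX_BYTES):
    """Return True if an image of this size fits within the guideline;
    oversized images should be compressed or resized before uploading."""
    return size_bytes <= max_bytes

print(ok_to_upload(120 * 1024))       # a 120kb image is fine
print(ok_to_upload(2 * 1024 * 1024))  # a 2MB image should be compressed first
```

In practice you would feed this the file size reported by your operating system or image editor before dragging files into the media library.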
Let’s go and take a look at menus, which are the primary vehicle for users to navigate through your WordPress website. By default every theme will have a primary navigation menu; however, many themes will provide you with the option to have multiple menus, as well as mega-menus, which are very popular in certain types of blogs, websites, and online stores. To create and manage menus, from your WordPress dashboard navigate to ‘Appearance’ > ‘Menus’. Alternatively, you can also do this via the theme customizer by navigating to the menus tab under ‘Appearance’ > ‘Customize’. The menus page is broken down into two tabs: edit menus, which is where you manage the content within your menus, and manage locations, which is where you can assign a menu to a specific menu location. To create a new menu, click on ‘Create a new menu’ and give it a name (pick something that makes sense to you); below that you’ll have your menu structure. You will first need to set a menu location for the newly created menu, which is at the bottom; depending on the theme you’re using, you may have various options. If you’re creating a new menu or managing an existing menu, you’ll need to add menu items from the available options on the left. To add pages to your menu, check the boxes next to the pages you want to add and select ‘Add to menu’. You can do the same thing to add posts, custom links, and categories. Under the screen options panel at the top, you are able to add more options, like products, tags, product categories, and product tags (if WooCommerce is enabled). You also have the ability to add CSS classes to individual menu items, as well as set the link target, which allows menu items to be opened in a new tab. Once you’ve added your menu items, select create menu, or save menu to amend an existing menu. To learn more, check out the WordPress Codex on menus. Next, we’ll look at widgets.
Widgets are components designed to serve a specific purpose that you can add to any widget-enabled area on your website (such as the sidebar or footer). Common widgets include the search widget, recent posts widget, categories widget, and text widget (which enables you to add text, images, links, and certain code snippets). There are often also theme- and plugin-specific widgets that bring in additional functionality. You can access the widgets panel by navigating to ‘Appearance’ > ‘Widgets’, or via the theme customizer, which can be located in the menu under ‘Appearance’ > ‘Customize’, by navigating to ‘Widgets’. To edit a widget, click on the dropdown arrow, update the widget information, and click save. Unlike in the customizer, any changes you make here will automatically be live on your website. To change the order of the widgets, drag and drop them into place. To remove a widget, simply click on delete. And to add a new widget, drag the widget from the left and drop it into the widget area on the right.

The last thing I’ll show you is the theme customizer, which you can access by hovering over ‘Appearance’ in the admin menu and selecting ‘Customize’. This will always be different depending on the theme you have installed, but it is where you will amend the global theme settings, like your website header, colors, and various other layout and appearance settings. Many premium themes will also have a custom tab where you will be able to manage the theme settings with greater control and more available options.

And that completes our WordPress tutorial for beginners. To learn more about WordPress, or how to set up a WordPress website or blog, check out one of our free WordPress tutorials. For more information about anything we’ve covered in this post, review the WordPress documentation, or the help tab from within WordPress.
Depending on which theme you’re using, the options will vary greatly, so remember to review the documentation for the specific theme you’re using.
2019-04-19T06:58:40Z
https://imhawkeye.com/website-related/wordpress-tutorial-for-beginners/
Multiple myeloma (MM) is a tumor localized at various sites within the bone marrow (BM) [1]. With over 20,000 new cases diagnosed per year in the United States, MM represents 1% of all cancers and approximately 10% of all hematological malignancies. The median age at diagnosis is 65 years [2]. MM is clinically defined by the CRAB symptoms: hyperCalcemia, Renal insufficiency, Anemia and/or Bone lesions [3]. Autologous stem cell transplantation (SCT) in eligible patients, proteasome inhibitors and immunoregulatory drugs have substantially increased response rates and overall survival during the past two decades [4]. In spite of these tremendous improvements, MM remains a largely incurable disease with a median survival of 6 years. Myeloma cells are the malignant counterpart of plasma cells, which are terminally differentiated B cells. Antibody-secreting plasma cells differentiate from naïve B cells that have recognized a foreign antigen [5]. This takes place in germinal centers of secondary lymphoid organs, where B cells undergo proliferation and somatic hypermutation followed by the selection of B cells with high antigen affinity. Plasmablasts exiting the germinal center migrate to the BM, where they find an appropriate environment allowing them to differentiate into mature long-lived plasma cells [5, 6]. Similarly, myeloma cells depend on the BM microenvironment for their survival, growth and differentiation [7]. The primary function of long-lived plasma cells is the secretion of antibodies (immunoglobulin, Ig) that mediate humoral immunity against infections. In contrast to normal plasma cells, myeloma cells secrete monoclonal Ig (M-proteins), which are central to disease pathogenesis and serve as a diagnostic marker detectable in the blood and urine of MM patients. MM is a multistep progressive disease that starts with an asymptomatic premalignant lesion called monoclonal gammopathy of undetermined significance (MGUS).
MGUS is present in 1% of the adult population and progresses to malignant MM at a rate of 1% per year [7]. Although MM develops in the BM, late stages may involve a loss of BM dependency and the development of extramedullary tumors in the blood, liver, spleen, lymph nodes, pleural fluid and skin [8]. When a high percentage (>20%) of malignant plasma cells is detected in the blood, the disease is called plasma cell leukemia. Malignant plasma cells arise from successive genetic lesions [9]. Early immortalizing events likely occur in germinal centers and involve translocations between Ig enhancers and oncogenes. Subsequently, secondary translocations activating proliferation and survival pathways contribute to increased tumor growth and extramedullary spread. Yet the factors determining the progression from a premalignant MGUS stage to active myeloma are not well understood. Microarray expression analysis has revealed a large number of genes differentially expressed between plasma cells of healthy donors and those of MGUS/MM patients, but very few genes could distinguish MGUS from MM plasma cells [10]. Along with genetic changes in plasma cells, the BM microenvironment is believed to play a crucial role in disease progression to symptomatic myeloma. Immune cells are important components of this microenvironment. Here, we review the importance of the immune network in promoting or controlling myeloma growth. We describe the interactions between the different members of the immune system, the BM stroma and the myeloma cells. Finally, we discuss various strategies implemented to trigger the immune elimination of myeloma cells. MM develops in the BM, a well-organized tissue residing in the cavities of bones. In adults, the BM is the primary site of hematopoiesis, the process by which hematopoietic stem cells give rise to the different types of blood cells, including erythrocytes, megakaryocytes, platelets and immune cells.
Besides providing hematopoietic stem cells with the specific microenvironmental niches required for their maintenance, proliferation and differentiation [11], the BM is also the primary residential site of plasma cells [5, 6]. Factors provided by highly specialized niches within the BM allow plasma cells to survive for years, even decades. It is postulated that the same factors support the growth of myeloma cells. The BM microenvironment consists of a cellular compartment, the extracellular matrix and soluble factors such as cytokines, chemokines and growth factors [12, 13]. BM-residing cells can be subdivided into hematopoietic cells, including immune cells, and non-hematopoietic cells such as stromal cells, adipocytes, osteoclasts, osteoblasts and components of the vasculature. Complex interactions between immune, non-immune and malignant myeloma cells influence MM progression (Fig. 1). The crucial role of immunity in the development and pathology of MM is the main focus of this review. The BM is a primary organ of hematopoiesis and therefore contains hematopoietic stem cells and progenitors of the myeloid and lymphoid lineages [11]. Myeloid cells such as monocytes, macrophages, dendritic cells (DCs) and granulocytes develop in the BM, are rapidly recruited to damaged or infected tissues and play major roles in early immune responses [14]. Their immature precursors may participate in MM pathology by favoring the proliferation of malignant plasma cells [15]. Monocytes differentiate into inflammatory macrophages or monocyte-derived DCs. Macrophages are phagocytes that contribute to inflammatory and healing responses through the secretion of diverse cytokines. Abundant in the BM of MM patients [16], macrophages have been shown to support the proliferation and survival of myeloma cells [17]. DCs have major functions in the initiation and orientation of adaptive immune responses.
Indeed, to acquire effector function, naïve T cells need to be ‘educated’ by antigen-presenting cells (APCs). DCs are professional APCs that display antigens on major histocompatibility complex (MHC) molecules and deliver the appropriate signals (co-stimulation and cytokines) necessary for T cell activation. Distinct subsets of DCs harbor specific antigen-presenting and immunoregulatory capacities [18]. The BM contains progenitors of conventional DCs as well as developing and mature plasmacytoid DCs (pDCs) [19]. In addition, circulating conventional DCs can migrate back to the BM, where they may stimulate T cell proliferation [20]. Such local activation of T cell responses may have considerable impact on the T cell-mediated control of MM. Granulocytes are subdivided into neutrophils, eosinophils and basophils [14]. Neutrophils are functionally impaired in MM patients [21], while eosinophils promote human and mouse myeloma cell growth [22, 23]. The lymphoid lineage comprises innate lymphoid cells (ILCs) and T and B lymphocytes. ILC progenitors, including the common helper ILC progenitor [24], group 2 ILC-restricted [25] and natural killer (NK) cell-restricted progenitors [26], are present in the BM. So far, NK cells have been the most studied members of the ILC family. NK cells play an important role in cancer immunology due to their capacity to directly recognize and kill tumor cells [27] and have received particular interest in MM [28]. B and T lymphocytes mediate adaptive immunity. Adaptive immune responses are antigen-specific but develop more slowly than innate responses carried out by myeloid cells or ILCs. An important feature of adaptive immune responses is the memory that allows faster and more potent responses following a subsequent encounter with the same antigen. Memory T cells certainly play a crucial role in controlling dormant tumor cells and preventing relapse [29].
B cells complete their development in the BM whereas early T cell progenitors leave the BM to achieve their development in the thymus. In addition, the BM cellular compartment typically contains less than 5 % plasma cells [ 6 , 30 ] and 1–5 % of re-circulating mature T cells [ 31 , 32 ]. There are different subsets of T cells. CD8 T cells are cytotoxic lymphocytes that eliminate tumor cells in an antigen-specific manner. Memory CD8 T cells preferentially home to the BM where they undergo basal proliferation allowing the maintenance of a cytotoxic memory [ 31 ]. CD4 T cells, also called helper T cells (Th), secrete cytokines that regulate immune responses [ 33 ]. They also support CD8 T cell functions as well as B cell differentiation into long-lived antibody-secreting plasma cells. Depending on the signals they receive, naïve CD4 T cells differentiate into different helper lineages with distinct cytokine secretion profiles. For instance, Th1 cells mainly produce interferon (IFN)-γ whereas Th2 cells secrete IL-4, IL-5, IL-10 and IL-13. Regulatory T cells (Treg) are another CD4 T cell subset characterized by the expression of the Foxp3 transcription factor. These cells down-regulate immune responses and are often, albeit not always, associated with poor outcome in cancer patients [ 34 ]. Interestingly, the BM is particularly rich in Tregs, which represent 25 % of CD4 T cells in this organ [ 35 ]. High numbers of immature myeloid cells and Tregs in the BM indicate a tolerance-prone microenvironment that may hamper the development of protective immune responses against MM. Even though extramedullary disease is detected in 7–18 % of newly diagnosed MM patients [ 36 ], myeloma cells are believed to be strictly confined to the BM in the early stages of the disease [ 7 ]. The observation that MM cells do not proliferate when cultured alone highlights the strong dependency of these cells on microenvironmental factors [ 30 ].
The chemokine stromal derived factor-1 (SDF-1 or CXCL12) is a key regulator of myeloma cell homing to the BM [ 37 ]. CXCL12 is produced by BM stromal cells (BMSCs) and interacts with CXCR4 on myeloma cells. In addition, the retention of myeloma cells within the BM is ensured by a range of interactions between myeloma cells, the BMSCs and the extracellular matrix. For instance, syndecan-1 (CD138), CD44, CD38 and integrins expressed by myeloma cells bind to various components of the extracellular matrix and serve as major anchors mediating physical interactions between malignant plasma cells and the solid textures of the BM [ 37 ]. Importantly, these receptors do not only mediate adhesion but also initiate signaling cascades within myeloma cells that contribute to their proliferation and survival. The cytokine IL-6 is probably the most important factor sustaining MM growth in the BM [ 38 ]. In fact, the loss of IL-6 dependency observed in advanced disease stages may facilitate the colonization of extramedullary sites by myeloma cells [ 39 ]. It is interesting to note that normal and malignant plasma cells respond quite differently to IL-6 stimulation: IL-6 increases Ig production by normal plasma cells but stimulates proliferation and resistance to apoptosis in MM cells [ 40 ]. BMSCs as well as T cells, B cells, monocytes or myeloma cells themselves produce IL-6 [ 41 ]. BMSCs are considered the predominant source of IL-6 in MM [ 42 ]. Still, the role of immune cells in IL-6-driven myeloma pathology should not be neglected. For instance, macrophages promote the proliferation of human MM cells in an IL-6-dependent manner [ 17 ]. Interestingly, mouse eosinophil-derived IL-6 contributes to the maintenance of long-lived plasma cells in the BM [ 43 ]. Similarly, human eosinophils have been shown to enhance the proliferation of MM cell lines, even though the IL-6-dependency of this phenomenon was questioned [ 22 ].
Although IL-6 is a key myeloma growth factor, its blockade with monoclonal antibodies has shown disappointing results in MM patients when administered with conventional chemotherapeutics [ 44 ]. This suggests that IL-6 inhibition is likely to be redundant for disease control during current standard therapies. B-cell activating factor (BAFF) and a proliferation-inducing ligand (APRIL) are related members of the TNF superfamily whose receptors are expressed on B cells at different stages of differentiation [ 5 ]. BAFF is necessary for the early stages of human plasmablast differentiation whereas long-term survival of plasma cells is APRIL-dependent [ 6 ]. Human primary myeloma cells express receptors for BAFF and APRIL [ 45 ]. Addition of these growth factors to an IL-6-deprived milieu rescues IL-6-dependent myeloma cell lines from apoptosis [ 46 ]. In the myeloma-infiltrated BM microenvironment, monocytes and neutrophils are the main sources of BAFF, while APRIL is produced by monocytes and osteoclasts [ 47 ]. Moreover, APRIL is expressed by mouse BM eosinophils that support normal plasma cell survival [ 43 ], but the role of APRIL production by eosinophils in MM requires further investigation. Although IL-6, BAFF and APRIL are probably the key proliferation and survival factors for myeloma cells, other factors contribute to MM pathology. In late MM stages, insulin-like growth factor (IGF)-1 may drive proliferation and survival of IL-6-independent myeloma cells [ 40 ]. Additional growth factors and cytokines promoting MM growth include G-CSF, GM-CSF, SCF, TNF-α, HGF, IL-3, IL-10, IL-15, IL-17, IL-21, vascular endothelial growth factor (VEGF) and osteopontin [ 8 , 12 , 48 ]. The BM provides survival niches for both normal and malignant plasma cells. In MM, malignant plasma cells hijack the diverse components of this microenvironment to further sustain MM growth and development.
For instance, myeloma cells induce BMSCs, osteoblasts and immature myeloid cells to produce IL-6, thereby promoting their own proliferation [ 8 , 15 , 49 ]. Importantly, myeloma perturbs normal bone remodeling, promotes angiogenesis and causes immune deficiencies. Bone destruction is a key pathological feature of MM. Development of focal lytic bone lesions or diffuse osteopenia leads to spontaneous fractures and increased calcium release that largely contribute to MM morbidity and mortality [ 40 ]. A study performed in a humanized severe combined immunodeficient (SCID) mouse model suggested that bone remodeling might also contribute to MM progression [ 50 ]. This process involves multiple interactions between myeloma cells, BMSCs, and bone forming and resorbing cells and their progenitors. In physiological conditions, osteoclasts clear away old bone tissue while osteoblasts create new bone. In MM, the balance between bone resorption and bone formation is disturbed. Myeloma cells inhibit osteoblast differentiation and promote osteoclast differentiation and activity [ 13 ]. Several factors have been implicated in myeloma-induced osteolysis. Interactions between the receptor activator of NF-κB (RANK) and its ligand (RANKL) are thought to play a crucial role in this process [ 40 ]. Although immune cells have been poorly investigated in the context of myeloma bone disease, a few studies have indicated a pivotal role for T cells. In MM patients, T cells are the main source of IL-3, a cytokine that triggers osteoclast formation while blocking osteoblast formation [ 51 ]. In addition, human MM cell lines induce RANKL expression on T cells, thus favoring osteoclastogenesis [ 52 ]. Interestingly, IL-17-producing T cells were shown to induce osteoclast activation and the levels of IL-17 were found to directly correlate with lytic bone disease in MM patients [ 53 ]. Thus, T cells significantly contribute to myeloma-induced osteolysis.
Further work should determine whether other members of the immune system are also involved. Angiogenesis is increased in patients with active MM, in comparison with MGUS or smoldering MM patients [ 54 ] and BM microvessel density has emerged as an independent prognostic factor in myeloma [ 55 ]. Inflammatory cells recruited and activated within the tumor microenvironment trigger the angiogenic switch [ 56 ]. In particular, MM-associated macrophages were shown to promote neovascularization [ 57 ]. Intriguingly, in solid tumors, vessels derived from neoangiogenesis show impaired structure and function, thus influencing leukocyte recruitment from the blood [ 58 ]. In these settings, angiogenesis blockade could reverse immunosuppression. The impact of angiogenesis on the immune composition of the BM microenvironment remains to be investigated in MM. It is widely recognized that MM patients have greater susceptibility to infections and secondary malignancies [ 59 ]. Immune dysfunctions are the consequences of both niche-occupancy and direct immunosuppression by malignant plasma cells [ 60 ]. Specific immune deregulations and their impact on anti-myeloma responses will be discussed later. Reciprocal interactions involving myeloma cells and the BM milieu contribute to the resistance to conventional chemotherapeutic agents. Therefore, novel therapeutic approaches aim to target not only the malignant cells, but also myeloma cell-stromal cell interactions and the BM microenvironment [ 51 ]. Response of myeloma cells to conventional therapies, such as glucocorticoids or cytotoxic chemotherapeutics, is attenuated by the presence of BMSCs [ 40 ]. The concept of cell-adhesion mediated drug resistance (CAM-DR) was first introduced in 1999 to describe the role of fibronectin-adhesion in protecting myeloma cells against apoptosis when exposed to cytotoxic agents. Fibronectin or BMSCs induce CAM-DR to a variety of drugs (e.g.
bortezomib, vincristine, doxorubicin and dexamethasone) and integrins expressed by malignant plasma cells play a key role in this process [ 37 ]. In addition to cell-to-cell contacts, soluble factors such as IL-6 contribute to myeloma cell resistance to chemotherapy [ 61 ]. The role of immune cells in drug resistance has been poorly investigated. Still, macrophages were found to protect myeloma cells from dexamethasone-, melphalan-, bortezomib- and doxorubicin- induced apoptosis [ 16 , 62 ]. Furthermore, hematopoietic stem cell niches in the BM might promote the survival of MM stem cells. Even if the concept of cancer stem cell in MM remains controversial, these cells have been proposed to be the root cause of drug resistance [ 63 ]. Inflammation is one of the hallmarks of cancer [ 64 ]. It has been well established that the inflammatory microenvironment facilitates proliferation, invasion, and metastasis of malignant cells in solid tumors [ 65 ]. In this context, myeloid cells are key inflammatory mediators that produce proinflammatory cytokines through recognition of diverse pathogen-associated molecular patterns (PAMPs) or damage-associated molecular patterns (DAMPs) by their pattern-recognition receptors [ 66 , 67 ]. Recently, subpopulations of tumor-associated myeloid cells have gained prominence due to their immunosuppressive functions. These cells include myeloid-derived suppressor cells (MDSC) and tumor-associated macrophages (TAM) [ 68 ]. Although these myeloid cells have been intensively studied in solid tumors, there is emerging evidence that they are key players in the BM milieu of hematological malignancies including MM [ 57 ]. MDSC are heterogeneous immature myeloid cells which are characterized by a potent ability to suppress anti-tumor immune responses mediated by T cells and NK cells [ 68 , 69 ].
Under pathological conditions including cancer, perturbation of normal differentiation of myeloid cells leads to generation of MDSC, which is triggered by persistent exposure to tumor microenvironment-derived soluble factors such as stem-cell factor, GM-CSF, prostaglandins, IL-6, and VEGF [ 70 ]. MDSC are subsequently recruited into tumor sites or lymphoid tissues in response to CCL2 [ 71 ], CXCL5 [ 72 ], and S100 proteins [ 73 ]. Initially, MDSC were identified in tumor-bearing mice as Gr-1+CD11b+ cells [ 74 , 75 ]. Phenotypically, MDSC can be divided into a granulocytic subset (CD11b+Gr-1highLy6G+Ly6Clow G-MDSC) and a monocytic subset (CD11b+Gr-1midLy6G−Ly6Chigh MO-MDSC) [ 68 ]. Accordingly, the two subsets of MDSC possess different suppressive mechanisms: G-MDSC chiefly use reactive oxygen species (ROS) such as hydrogen peroxide, whereas MO-MDSC use inducible nitric oxide synthase (iNOS) and arginase. Hydrogen peroxide and iNOS-derived peroxynitrite inhibit T-cell receptor signal transduction [ 68 ], whereas arginase sequesters L-arginine, which is required for T cell proliferation [ 76 ], both of which dampen T cell-mediated anti-tumor immune responses in a cell-to-cell contact dependent manner. In addition to their direct immunosuppressive activities, MDSC are capable of inducing Tregs. Although the exact mechanism is not fully understood, diverse molecules are reported to be implicated in the cross-talk between MDSC and Treg cells including arginase [ 77 ], CD40 [ 78 ] or cytokines (TGF-β, IFN-γ and IL-10) [ 79 ]. Moreover, MDSC stimulate tumor angiogenesis through secretion of MMP-9 or direct differentiation into CD31+ endothelial cells [ 80 ]. Thus, MDSC have multifaceted pro-tumor functions in the tumor microenvironment. Recently, several studies have shown that MDSC are important players in myeloma-infiltrated immune microenvironments.
In ATLN and DP42 murine myeloma models, both the proportion and the absolute number of G-MDSC and MO-MDSC in BM were significantly increased as early as 1 week after inoculation, and thereafter gradually decreased due to progressive expansion of myeloma cells [ 81 ]. Similar results were reported by another group in 5TMM models [ 82 , 83 ], suggesting that the expansion of MDSC is an early event in MM. Proinflammatory S100 proteins play a pivotal role in the accumulation of MDSC in solid tumors [ 73 ]. Notably, S100A9-deficient mice showed prolonged survival compared to wild type mice after inoculation of OVA-expressing DP42 cells, which was associated with a reduction of MDSC in BM and an increase in OVA-specific CD8+ T cells. Furthermore, the survival benefit in S100A9-deficient mice was abrogated by antibody depletion of CD8+ T cells or adoptive transfer of MDSC [ 81 ], demonstrating that MDSC dampen CD8+ T cell-dependent anti-myeloma immune responses, leading to myeloma progression. In humans, Brimnes et al. [ 84 ] first reported that newly diagnosed MM patients have increased frequencies of CD14+ MO-MDSC in peripheral blood compared to healthy donors. In contrast, recent studies showed that CD14−CD15+ G-MDSC, but not CD14+ MO-MDSC, are increased in myeloma patients [ 81 , 85 , 86 ], while both subsets show the same level of suppressive activity against autologous T cells [ 86 , 87 ]. In addition to their immunosuppressive activities, MDSC can directly stimulate proliferation of myeloma cells. Görgün et al. [ 85 ] showed that co-culture with MDSC markedly enhanced the proliferation of myeloma cells in vitro. Importantly, in this study, co-culture of myeloma cells and peripheral blood mononuclear cells (PBMC) from healthy donors was able to induce generation of MDSC, providing evidence for bidirectional interaction between myeloma cells and MDSC. Moreover, Favaloro et al.
[ 86 ] reported that MDSC from MM patients can markedly induce Tregs after co-culture with PBMC. Thus, MDSC contribute to an immunosuppressive, tumor-favoring environment, providing a potential target in myeloma therapy. It is now established that tumor tissues are abundantly infiltrated by TAMs, which support tumor progression by angiogenesis, matrix remodeling and potent immunosuppression [ 88 , 89 ]. In general, higher levels of TAMs correlate with poor prognosis in many types of solid tumors [ 90 ] as well as hematological malignancies including MM [ 91 ]. In terms of ontogeny of TAMs, both tissue-resident macrophages and recruited macrophages coexist in the tumor microenvironment [ 92 ]; however, recent studies have clarified that TAMs are phenotypically distinct from residential macrophages, and originate from circulating Ly6C+ inflammatory monocytes [ 71 , 93 , 94 ]. Colony-stimulating factor-1 receptor (CSF1R) signaling and CCL2-CCR2 interaction are implicated in the recruitment of monocytes to the tumor microenvironment [ 71 , 95 ] where differentiation and functional maturation of TAMs are regulated by Notch signaling and environmental factors such as hypoxia [ 94 , 96 , 97 ]. It remains unknown whether myeloma-associated macrophages originate from circulating monocytes or BM residential precursors; however, massive BM infiltration by CD68+ macrophages is observed in active MM patients, but not in MGUS patients or healthy donors [ 16 , 98 ], indicating that macrophages represent a pivotal cellular component of the myeloma microenvironment. Myeloma-associated macrophages contribute to myeloma pathology in at least three different ways. Firstly, myeloma-associated macrophages support myeloma growth through cytokine production. Importantly, myeloma-associated macrophages highly express IL-1 and TNF-α [ 99 , 100 ], both of which stimulate production of IL-6 from mesenchymal stem cells (MSCs).
Additionally, myeloma-associated macrophages secrete the anti-inflammatory cytokine IL-10, another growth factor for myeloma cells [ 100 , 101 ]. Until recently, it remained unclear whether or not myeloma-associated macrophages produce cytokines in response to specific PAMPs and/or endogenous DAMPs in the myeloma microenvironment. Hope et al. first showed that myeloma-associated macrophages contribute to the inflammatory milieu through toll-like receptor (TLR)-2/6-mediated recognition of its proteoglycan agonist, versican. Furthermore, they also found that genetic ablation of tpl2 (Cot/MAP3K8), a downstream effector of TLRs, delays myeloma progression in Vk*myc transgenic mice [ 100 ], highlighting the importance of this pathway. Another important function of myeloma-associated macrophages is angiogenesis and vasculogenesis. Angiogenesis within the myeloma microenvironment is amplified by a positive feedback loop of proangiogenic factors including VEGF, basic fibroblast growth factor (bFGF), TNF-α, and IL-6 [ 102 ]. In addition to secretion of these proangiogenic factors, myeloma-associated macrophages contribute to angiogenesis through a commitment toward an endothelial phenotype. Scavelli et al. [ 98 ] reported that exposure to VEGF and bFGF converts myeloma-associated macrophages into cells which are functionally and phenotypically similar to endothelial cells, leading to formation of capillary-like structures. Lastly, myeloma-associated macrophages support myeloma cells in a cell-to-cell contact dependent manner. Zheng et al. [ 16 ] reported that myeloma-associated macrophages protect myeloma cells from caspase-dependent apoptosis, which confers resistance to chemotherapy. Notably, IL-6 was dispensable in this mechanism. Instead, macrophage-mediated myeloma survival was dependent on interactions between P-selectin glycoprotein ligand-1 (PSGL-1)/selectins and ICAM-1/CD18, which transmit survival signaling involving Src, Erk1/2 and c-myc [ 62 ].
NK cells are ILCs which play a key role in tumor immunosurveillance [ 27 ]. They express a wide range of germline-encoded receptors that allow them to recognize stressed or unhealthy cells such as tumor cells. NK cells directly kill the target cells by releasing lytic granules containing granzymes and perforin or through the membrane-bound death ligands TNF-related apoptosis inducing ligand (TRAIL) and Fas ligand (FasL). NK cells also secrete a large array of cytokines and chemokines, among which IFN-γ is known for its potent anti-tumor properties. In humans, NK cells are often characterized as CD3−CD56+ lymphocytes, which are further divided into two populations: CD56dimCD16+ and CD56brightCD16− cells [ 103 ]. Of note, CD56 expression by malignant plasma cells represents an obstacle to NK cell analysis in MM patients, even if size parameters should allow the distinction between NK cells and myeloma cells [ 104 ]. The importance of NK cells for the control of myeloma progression has been demonstrated using NK cell-depleting antibodies in various mouse MM models [ 105 , 106 ]. Furthermore, several groups established the ability of human NK cells to kill MM cell lines [ 107 – 109 ]; and cytotoxic activity of autologous NK cells against patient-derived myeloma targets has also been reported [ 108 , 110 ]. A particularity of NK cells is their ability to sense cells that have down-regulated MHC class I molecules. Human NK cells express various combinations of killer cell immunoglobulin-like receptors (KIR) that deliver negative signals upon binding to MHC class I molecules, thus preventing reactivity against normal healthy cells [ 111 ]. MHC class I down-regulation is frequently observed in cancer cells. Accordingly, early-stage myeloma cells express low levels of MHC class I, and are readily recognized by NK cells [ 108 ].
In addition, myeloma cell recognition by NK cells involves various activating receptors including NKG2D, DNAX accessory molecule (DNAM-1 or CD226) and the natural cytotoxicity receptors (NCRs) NKp46, NKp30, NKp44 [ 107 , 108 ]. NKp46 is certainly a key receptor for NK cell recognition of malignant plasma cells because its inhibition strongly reduced NK cell-mediated killing of all the myeloma cell lines so far tested [ 107 ]. Human NKG2D binds to MHC class I related chain A and B (MICA/MICB) and to UL16 binding proteins (ULBP1-6). ULBP1-3 have been detected on some myeloma cell lines while high levels of MICA were observed on BM-derived MM cells [ 108 ]. The NK cell-mediated killing of MICA-expressing myeloma cells was found to be NKG2D-dependent [ 107 ]. Moreover, nectin-2 (CD112) and the poliovirus receptor (PVR, CD155), the two known DNAM-1 ligands, are heterogeneously expressed on malignant plasma cells. Indeed, a study including 12 MM patients revealed CD155 and/or CD112 expression on all but two samples [ 107 ]. Blocking DNAM-1 inhibited the in vitro killing of CD155-expressing myeloma cell lines [ 107 ]. Importantly, the role of DNAM-1 in controlling myeloma progression has been investigated in vivo, in Vk*myc transgenic mice that spontaneously develop MM [ 105 ]. In this study, DNAM-1+/+, DNAM-1+/− and DNAM-1−/− Vk*myc mice were monitored for disease development and survival over 800 days. Mice lacking DNAM-1 exhibited higher levels of serum monoclonal protein and succumbed earlier to MM. Although this work highlights the importance of DNAM-1 in MM immunosurveillance, the role of DNAM-1 for NK cell-mediated control of MM growth is still to be demonstrated because, akin to NKG2D, DNAM-1 is not an NK cell-specific receptor but is also expressed on T cells. Most studies have focused on the cytolytic activity of NK cells against MM cells. However, little is known about NK cell-derived IFN-γ in this context.
Interestingly, IFN-γ-deficient mice injected with MM cell lines show shorter survival associated with higher tumor burden when compared with WT mice [ 105 ]. IFN-γ not only stimulates innate and adaptive immune responses [ 112 ], but it also inhibits the in vitro proliferation of myeloma cells [ 113 ] and interferes with the RANKL signaling pathway to decrease osteoclastogenesis [ 114 ]. Thus, IFN-γ production by NK cells may significantly reduce MM pathology and this pathway warrants further investigation. An early report described increased numbers of CD56+CD3− NK cells in the BM and blood of newly diagnosed myeloma patients [ 104 ]. Subsequent studies confirmed that patients with MGUS or active myeloma present elevated numbers of circulating NK cells [ 115 ]. Surprisingly, patients with higher numbers of NK cells at diagnosis were found to have worse prognoses [ 104 ]. In fact, increased NK cell numbers may be seen as an unsuccessful attempt of the immune system to control myeloma cell expansion. It is now well established that NK cell activity is largely compromised in MM patients since various mechanisms contribute to impair NK cell recognition and killing of myeloma cells. Immune escape of cancer cells involves two mechanisms: the immunoediting of tumor cells and the suppression of immune functions [ 116 ]. Both phenomena have been observed in MM. Interestingly, NK cell receptor ligands on myeloma cells are progressively edited during myeloma progression, underscoring the role of NK cell control in the early stages of the disease and suggesting that impairment of NK cell responses may constitute a major event in promoting MGUS progression to MM. Indeed, malignant plasma cells or myeloma cell lines derived from early-stage MM/MGUS patients exhibit higher levels of MICA or Fas than plasma cells obtained from patients with active disease or cell lines derived from late-stage pleural effusions [ 108 , 117 , 118 ].
In addition, two studies observed a down-regulation of MHC class I molecules on the surface of plasma cells from early-stage but not late-stage MM patients [ 108 , 117 ]. Of note, these results contrast with a third study that reported opposite observations, i.e., an up-regulation of MHC class I molecules on BM plasma cells from MGUS patients compared with healthy donors and MM patients [ 119 ]. Still, myeloma cell lines established from the BM are sensitive to NK cell-mediated lysis whereas cell lines generated from pleural effusions from the same donor are resistant [ 108 ]. Likewise, increased degranulation of NK cells in the BM of MM-bearing mice could only be observed at early time points of disease development [ 106 ]. MICA shedding from the surface of malignant plasma cells generates soluble MICA that may contribute to altered NKG2D expression and defective NK cell functions [ 118 ]. While decreased NKG2D expression on NK cells from MM patients has been confirmed by another study, the role of soluble MICA in this process has been questioned [ 120 ]. In addition to NKG2D, other activating receptors showed reduced expression on NK cells from active myeloma patients. These include DNAM-1, 2B4/CD244 as well as the low affinity Fc receptor CD16 [ 107 , 121 ]. Therefore, altered expression of activating receptors is likely to contribute to myeloma cell escape from cancer immunosurveillance. A recent study indicated that skewed chemokine levels hinder NK cell trafficking to the BM during the early asymptomatic stages of the disease [ 106 ]. This may represent another mechanism that contributes to myeloma cell escape from NK cell control. Compared with healthy donors, CD4/CD8 T cell ratios are decreased in the blood of MM patients [ 122 ]. Soluble factors present in the MM microenvironment (e.g. TGF-β, IL-10 and VEGF) along with defective antigen presentation by DCs may lead to deficient T cell responses in MM patients [ 123 , 124 ].
Though the BM is a primary lymphoid organ, it also functions as a secondary lymphoid organ where T cell responses are initiated [ 125 ]. In this context, efficient uptake and processing of circulating tumor-associated antigens by BM CD11c+ DCs is critical for the priming of T cell-mediated anti-tumor immune responses. Many studies concluded that DCs from MM patients have impaired T-cell stimulation capacities, whereas contradictory results exist regarding the frequency and phenotype of DCs [ 126 – 129 ]. Several soluble factors including IL-6, TGF-β and IL-10 seem to be involved in the impairment of DC functions [ 127 , 128 ]. Recently, Leone et al. reported that DCs accumulate in BM during the MGUS-to-MM progression. In this study, DCs purified from MGUS/MM patients were able to engulf apoptotic myeloma cells, cross-present them and activate tumor-specific CD8+ T cells whereas CD28–CD80/86 interaction between live myeloma cells and DCs down-regulated expression of proteasome subunits in myeloma cells [ 130 ]. This mechanism may enable myeloma cells to evade CD8+ T-cell killing in spite of efficient T cell priming. pDCs, the other major subset of DCs, are also involved in myeloma pathology. pDCs play pivotal roles in the generation of normal plasma cells and antibody responses through secretion of type I IFN and IL-6 [ 131 ]. Chauhan et al. [ 132 ] showed that numbers and frequency of BM pDCs are increased in MM patients and that pDCs confer growth, survival, chemotaxis, and drug resistance on myeloma cells. Mouse models support an instrumental role of cytotoxic CD8 T cells in MM immunosurveillance [ 105 ]. Several pieces of evidence indicate that myeloma cells express tumor antigens able to trigger T cell responses. Analysis of the T cell receptor (TCR) variable gene repertoire revealed clonal expansions of CD8 T cells in MGUS and early stage MM patients that probably reflect chronic stimulations with myeloma-derived antigens [ 133 ].
Tumor-specific T cells able to lyse autologous myeloma cells can be generated from the blood or BM of myeloma patients using myeloma lysate-pulsed DCs [ 134 , 135 ]. Nonetheless, T cells freshly isolated from MM patients fail to recognize autologous tumor cells and to secrete IFN-γ, suggesting that they probably do not exert a strong anti-myeloma activity in vivo [ 134 ]. Conversely, freshly isolated T cells from the BM of MGUS patients produce IFN-γ when stimulated in vitro with DCs loaded with autologous tumor cells [ 136 ]. These data suggest that the anti-myeloma activity of tumor antigen-specific T cells is lost during the progression from MGUS to active myeloma. Noteworthy, T cells reactive against the embryonal stem cell-associated antigen SOX2 have been detected in MGUS but not MM patients [ 137 ]. SOX2 was reported to be expressed in a progenitor fraction of myeloma cells and anti-SOX2 T cell immunity correlates with a favorable outcome. In addition, patients that survived more than 10 years present expanded cytolytic T cell clones that, unlike the majority of MM patients, respond to stimulation by proliferating and producing IFN-γ [ 138 ]. Interestingly, T cells isolated from MGUS or MM patients are activated by DCs loaded with autologous but not allogeneic tumor lysates [ 134 – 136 ]. This indicates that T cell responses against MM are specific to each myeloma clone and differ from one patient to another. The antigenic properties of the variable region of the secreted monoclonal protein (idiotope) have been extensively studied [ 139 ]. Unfortunately, idiotype-specific responses are usually hindered by several tolerance mechanisms, including the deletion of high avidity idiotype-specific T cells [ 140 ]. In addition to idiotopes, general tumor antigens are shared among MM cells from different patients. Those include NY-ESO-1, MAGE-A3, Muc-1, sperm protein 17, PRDI-BF1, XBP-1 and CD138 [ 139 ].
Adoptive transfer of T cells engineered to express a high affinity TCR for a myeloma-specific antigen represents an attractive therapy. As an example, the infusion of NY-ESO-1-specific engineered T cells recently showed promising results in a phase I/II clinical trial [ 141 ]. Helper T cells play pivotal roles in adaptive immune responses and imbalanced polarization of CD4 T cell responses could largely impact MM growth. Several reports describe a deregulated cytokine network in MM but not all of them agree on the nature of the changes. An early study established that T cells from MGUS patients stimulated with autologous monoclonal IgG are more efficient producers of IL-2 and IFN-γ when compared with idiotype-reactive T cells from late MM patients [ 142 ]. Along the same lines, increased IL-4 production by T cells from MM patients indicated that a shift toward Th2 polarization emerges with disease progression. This hypothesis is supported by another study describing decreased levels of IFN-γ and increased levels of IL-10 and IL-4 in the serum of 62 myeloma patients compared with 50 healthy donors [ 143 ]. IL-6 production by T cells may contribute to decreased Th1 responses in MM patients [ 144 ]. However, elevated Th1/Th2 ratios in the blood of MM patients at initial diagnosis and in the refractory phase have also been reported [ 145 , 146 ] and high percentages of IFN-γ producing T cells were observed in MM patients [ 147 ]. Further work is needed to determine whether these discrepancies reflect variations in Th1/Th2 polarization during the course of the disease or are explained by differences between BM and peripheral blood. Likewise, the role of Treg responses in MM is still unclear. An initial study demonstrated decreased FoxP3-expressing Tregs in spite of elevated percentages of CD25+CD4+ cells in the blood of MGUS and MM patients [ 148 ].
This group suggested that CD25+ T cells from MGUS or MM patients fail to suppress the proliferation of PBMC stimulated with anti-CD3, and concluded that MM Tregs were dysfunctional. However, one caveat of this assay is the intrinsic defect in the proliferation of PBMC isolated from MM patients. It was later established that CD4+CD25hi Tregs from MM patients are as efficient as Tregs from healthy donors at suppressing allogeneic T cell proliferation [ 149 ]. Results are still conflicting regarding Treg frequencies, which have alternatively been described as increased [ 138 , 149 , 150 ] or reduced [ 147 , 148 ] in MM patients. Interestingly, patients with high peripheral Treg frequencies show reduced survival [ 150 ]. Th17 cells are a pro-inflammatory subset of CD4 T cells that produces IL-17 and IL-22. IL-6 plays a pivotal role in dictating the balance between Tregs and Th17 cells [ 151 ]. Treg/Th17 ratios were reportedly increased in MM patients, albeit lower in patients with long-term survival [ 138 ]. Yet, increased proportions of IL-17-producing CD4 T cells and increased serum concentrations of the Th17-associated cytokines IL-1β, IL-6, IL-17, IL-21, IL-22 and IL-23 have been observed in MM patients when compared with healthy controls [ 48 , 147 , 152 ]. IL-17 might contribute to MM pathology, as it induces the proliferation of MM cell lines in vitro [ 48 ] and promotes MM-associated bone lesions [ 53 ]. Finally, an increased frequency of IL-22 and IL-13 double-producing T cells has been detected in the blood and BM of relapsed and late-stage MM patients [ 153 ]. These Th22 cells are likely to sustain MM pathology, since IL-22 favors the proliferation and resistance to drug-induced cell death of some MM cell lines and IL-13 indirectly promotes MM cell survival through the activation of BMSCs. NKT cells are characterized by the expression of both T cell and NK cell receptors.
NKT cells recognize glycolipid antigens presented by the MHC-class I-like molecule CD1d and exert strong anti-tumor responses through direct cytotoxicity or release of pro-inflammatory cytokines, including IFN-γ [ 154 ]. Despite its absence on MM cell lines, CD1d is expressed by primary myeloma cells [ 155 ]. Lysophosphatidylcholine has been identified as an NKT cell ligand expressed on plasma cells from MM patients [ 156 ]. Interestingly, frequencies of lysophosphatidylcholine-recognizing NKT cells are dramatically increased in MM patients. Lysophosphatidylcholine stimulates IL-13 production by NKT cells and thus probably favors angiogenesis and tumor-promoting inflammation. In fact, NKT cells from MM patients are dysfunctional and unable to produce IFN-γ when stimulated with the glycolipid α-galactosylceramide [ 157 ]. Of note, similarly to conventional T cells, NKT cells isolated from MM patients can be rescued in vitro; APC-stimulated NKT cells efficiently lyse primary autologous myeloma targets as well as CD1d-transfected MM cell lines [ 155 , 157 ]. MR1-restricted mucosal-associated invariant T (MAIT) cells are another type of invariant T cells that, similarly to NKT cells, have simplified patterns of TCR expression and respond immediately to antigen stimulation [ 158 ]. Albeit abundant in humans (5 % of total blood T cells), MAIT cells have not been investigated in the context of MM. Immunosuppression is an important characteristic of MM pathology [ 159 ]. Reversing this immunosuppression could potentially restore myeloma immunosurveillance and improve disease control (Fig. 2). Although MM remains an incurable malignancy, the introduction of autologous stem cell transplantation (SCT) following myeloablative treatment has contributed significantly to the improved survival of MM patients observed in the last 15 years [ 160 ].
By introducing a new immune system and facilitating homeostatic lymphocytic proliferation in the setting of minimal residual disease, autologous SCT may overcome the acquired immune defects induced by myeloma. Absolute lymphocyte count recovery post-autologous SCT constitutes an independent prognostic factor for transplanted MM patients [ 161 ]. Intriguingly, Wolniak et al. [ 162 ] described a clonal population of CD8+CD57+ large granular lymphocytes in the BM and blood of MM patients post-autologous SCT. Although the specificity of these cells remains to be established, they might recognize tumor antigens and potentially drive graft-versus-myeloma (GvM) responses. Unfortunately, the GvM effect induced by autologous SCT, if it exists, is generally weak, and most patients relapse. The transfer of marrow-infiltrating lymphocytes enriched in myeloma-specific T cells may enhance the GvM effect [ 163 ]. An alternative is allogeneic SCT, which has the advantages of providing recipients with a new T cell repertoire and of triggering potent GvM effects against multiple minor histocompatibility antigens [ 164 ]. The infusion of primed lymphocytes collected after donor immunization with a tumor-specific antigen may further enhance the GvM effect following SCT [ 165 ]. An interesting study used a pre-clinical humanized mouse model of MM to demonstrate the therapeutic potential of allogeneic T cell infusions [ 166 ]. Immunodeficient mice were used as recipients for human MM cell lines and were or were not transferred with naïve allogeneic human T cells. In this model, a nonconventional population of double positive CD4+CD8+ T cells was induced in MM-bearing mice. These myeloma-induced alloreactive T cells produced IFN-γ and perforin and may mediate GvM responses. Nonetheless, because allogeneic SCT is associated with a high transplant-related mortality in the setting of myeloma, its place as a therapeutic strategy remains investigational [ 167 ].
Thalidomide is a glutamic acid derivative that has proved highly effective for the treatment of advanced MM patients [ 168 ]. The anti-angiogenic properties of thalidomide [ 169 ], together with its direct effect on MM cells [ 170 ] and its potent anti-inflammatory capacities [ 171 ], contribute to its anti-myeloma activity. Structural analogs of thalidomide have been selected based on their ability to inhibit TNF-α production. Among them, a class of compounds was found to significantly inhibit pro-inflammatory cytokine production by lipopolysaccharide (LPS)-stimulated PBMCs while increasing T cell responses to anti-CD3 stimulation [ 171 ]. These thalidomide analogs with unique immune regulatory properties are called immunomodulatory drugs (IMiDs). Beyond their direct anti-tumor effect [ 170 ], the defining feature of IMiDs is their ability to promote host immunity while abrogating the protection conferred by the BM microenvironment [ 172 ]. Lenalidomide (Revlimid, CC-5013) and pomalidomide (Actimid, CC-4047) are the two most studied IMiDs. Both increase the cytotoxic activity of T cells [ 173 , 174 ] and NK cells [ 175 ] against MM cells. In addition, lenalidomide potentiates IFN-γ production by anti-CD3/anti-CD28-stimulated CD8 T cells from MM patients [ 174 ], and NK cells cultured for several days in the presence of lenalidomide produce higher levels of TNF-α and IFN-γ when stimulated with MM cell lines [ 176 ]. The lenalidomide-mediated up-regulation of TRAIL expression on NK cells could partially explain this enhanced cytotoxicity [ 176 ]. By contrast, pomalidomide-mediated enhancement of NK cell activity requires the presence of other cell populations [ 177 ]. IMiDs were found to stimulate IL-2 production by T cells and thereby indirectly trigger NK cell functions [ 178 ]. Furthermore, the ability of lenalidomide to boost NKT cell responses [ 155 , 179 ] raises the possibility of combining it with NKT cell targeting approaches.
Finally, lenalidomide may be particularly active in patients who have relapsed following allogeneic SCT [ 180 – 182 ]. These data are consistent with the immune stimulatory capacities of lenalidomide, which likely boost endogenous GvM effects. In fact, an increase in activated T and NK cells has been observed in the blood of lenalidomide-treated patients [ 182 ]. Surprisingly, two studies reported increased circulating Tregs during lenalidomide treatment of allogeneic SCT patients [ 180 , 182 ]. However, these observations were made on very small cohorts comprising fewer than ten patients. Further studies should not only confirm the effect of IMiDs on immune cell frequencies/activation in the blood but also investigate how these new agents modulate immune responses in the BM of transplanted or non-transplanted MM patients. The ubiquitin–proteasome pathway carries out protein turnover and its disruption induces the apoptosis of some cancer cells, including MM cells [ 183 ]. Bortezomib (Velcade), a proteasome inhibitor, has proven efficacy in MM [ 184 ]. In addition to sensitizing tumor cells to apoptosis, bortezomib modulates host immune responses [ 185 ]. In vitro incubation of MM cells with bortezomib decreases their expression of MHC class I molecules while augmenting the display of activating NK cell receptor ligands [ 110 , 186 ]. Such changes may facilitate NK cell recognition and killing of MM cells. Moreover, bortezomib induces immunogenic cell death of MM cells, thereby facilitating the DC-mediated elicitation of anti-myeloma T cell responses [ 187 ]. However, several reports indicated an immunosuppressive effect of bortezomib. Lymphopenia has been observed in about 10 % of bortezomib-treated patients [ 184 ] and is consistent with the in vitro toxicity of this drug toward lymphocytes [ 188 ]. Additional in vitro studies established the ability of bortezomib to inhibit DC [ 189 ] and NK cell [ 190 ] functions.
Of note, bortezomib has been used at high concentrations in most experiments supporting an immunosuppressive effect [ 185 ]. Such effects may not occur in vivo, where immune cell exposure to the drug may be lower. Notably, bortezomib augments the anti-tumor effect of autologous NK cell infusions in mice [ 191 ]. In addition, the T and NK cell activating receptor DNAM-1 was found to be necessary for the therapeutic efficacy of bortezomib in MM-bearing mice [ 105 ]. Interestingly, early bortezomib treatment following allogeneic SCT protects mice from acute GvHD [ 192 ]. However, caution should be taken when combining bortezomib with allogeneic SCT, because two subsequent studies in mice established that delayed bortezomib treatment post-transfer exacerbates GvHD-dependent mortality [ 193 , 194 ]. While the mechanisms behind these observations remain unclear, bortezomib may differentially regulate the distinct phases of GvHD. Other proteasome inhibitors have generated interest for the treatment of MM. Carfilzomib is a second-generation proteasome inhibitor that has demonstrated anti-myeloma efficacy in clinical studies [ 195 ]. Several therapeutic agents exert their anti-MM efficacy at least in part through the recovery or augmentation of NK cell responses [ 28 ]. As previously mentioned, new drugs such as thalidomide, IMiDs and proteasome inhibitors potentiate NK cell-mediated killing of MM cells. Furthermore, compared with T and B cells, NK cells reconstitute early following autologous SCT and may contribute to the success of this therapy [ 196 ]. NK cells are also important mediators of GvM effects after T cell-depleted allogeneic SCT, especially in the case of KIR-ligand incompatibility, i.e., when donor NK cells do not express inhibitory KIRs recognizing host MHC class I molecules.
Reduced relapse rates have been observed in MM patients receiving KIR-ligand mismatched allogeneic transplants [ 197 ], and infusion of KIR-ligand mismatched NK cells followed by autologous SCT achieved near-complete remission in 50 % of advanced MM patients [ 198 ]. Furthermore, IPH2101 (1-7F9), an anti-inhibitory-KIR antibody, could restore NK cell responses in relapsed/refractory MM patients [ 199 ]. Phase I clinical trials indicate that IPH2101 is well tolerated when given as a single agent or in combination with lenalidomide, but its efficacy against MM has yet to be proven [ 199 , 200 ]. Alternatively, reprogramming NK cells with chimeric antigen receptors (CARs) specific for an MM antigen could increase their reactivity toward myeloma cells. Myeloma cells express high levels of CD138, and a pre-clinical study demonstrated the ability of CD138-specific CAR NK cells to markedly delay MM growth and prolong survival in a xenograft model [ 201 ]. Finally, monoclonal antibody (mAb) therapies targeting myeloma cells have demonstrated clinical efficacy when combined with bortezomib or lenalidomide [ 202 ]. CD16 on NK cells binds the constant region of Ig and likely plays a key role in mAb therapies by triggering antibody-dependent cellular cytotoxicity (ADCC) of mAb-coated tumor cells. Among the different mAbs tested, daratumumab targets CD38, an ectoenzyme commonly used as a marker of myeloma cells. Daratumumab administered as a single agent has demonstrated anti-myeloma activity in clinical trials [ 203 ]. Elotuzumab is another successful mAb [ 204 ]. Elotuzumab recognizes CS1 (SLAMF7), a glycoprotein universally expressed on MM cells. Elotuzumab activity appears to be dependent on NK cell-mediated ADCC [ 205 ]. Interestingly, NK cells express low levels of CS1. The binding of elotuzumab to CS1 directly promotes NK cell activity and thus contributes to enhanced anti-tumor effects [ 206 ].
Immune checkpoints such as programmed death 1 (PD-1) and cytotoxic T-lymphocyte associated protein 4 (CTLA-4) down-regulate T cell responses and thereby maintain self-tolerance. The use of mAbs to disrupt the receptor-ligand interactions involved in these pathways has shown remarkable results in melanoma [ 207 ]. Immune checkpoint modulation also holds promise for the treatment of hematological malignancies [ 208 ]. The high levels of PD-1 observed on NK and T cells from MM patients, together with the expression of the PD-1 ligand PD-L1 on MM cells [ 209 , 210 ], strongly encouraged the investigation of PD-1 blockade in MM patients. Yet, a phase 1 clinical trial using the anti-PD-1 mAb nivolumab in MM patients yielded disappointing initial results, with no objective responses [ 208 ]. Of note, disease remained stable in 18 of 27 patients, indicating that PD-1 blockade might still have an effect in MM and could be efficient in combination with other therapeutics. In vitro studies suggested that PD-1 blockade by pidilizumab (CT-011) would combine synergistically with lenalidomide [ 210 ] or with a DC/myeloma fusion vaccine [ 209 ]. Consequently, a phase 2 clinical trial is currently ongoing to assess the efficacy of a DC/tumor vaccine in conjunction with pidilizumab following autologous SCT (NCT011067287) [ 164 ]. Additionally, a preclinical study indicated that blocking PD-L1 together with other immune checkpoints (CTLA-4, LAG-3 or TIM-3) promotes the survival of MM-bearing mice following low-dose total body irradiation [ 211 ]. Investigation of immune checkpoint inhibitors is currently booming and should eventually lead to efficient combination strategies in MM. An alternative to immune checkpoint blockade is the use of agonist mAbs directed against co-stimulatory molecules. Approaches using anti-CD137 mAbs have been shown to elicit potent T and NK cell-mediated responses in murine MM models [ 105 , 212 ].
Clinical trials are presently ongoing to evaluate the safety and beneficial effects of two agonist anti-CD137 mAbs in cancer patients [ 213 ]. Additional strategies able to promote the immune-mediated elimination of myeloma cells include DC-based therapies and vaccines as well as T cell infusions [ 164 ]. Vaccination can be directed against a specific tumor antigen such as the idiotype protein, but can also use full tumor lysates, apoptotic bodies or fusions between DCs and MM cells [ 214 ]. This second option allows the development of T cell responses directed toward the whole spectrum of tumor antigens and thus prevents a possible escape caused by the down-regulation of a single targeted antigen. DC-based therapies aim to foster the expansion of tumor-specific lymphocytes in vivo; an alternative strategy is the infusion of ex vivo expanded T cells. Similarly to NK cells, T cells can be engineered to express CARs, thereby allowing the specific targeting of myeloma cells [ 215 ]. Early studies established the role of the BM microenvironment in MM pathology, but the immune component of this microenvironment did not receive full attention until recently. MM appears to represent a good disease model of the cancer immunoediting process [ 216 ], in which a premalignant equilibrium phase (MGUS) and an escape phase (active MM) can be observed. Several lines of evidence indicate that changes in immune responses may drive MGUS-to-MM progression [ 117 , 119 , 136 ]. Therapeutic options such as autologous SCT, thalidomide, IMiDs and proteasome inhibitors have the ability to restore and enhance anti-myeloma immune responses, properties that have likely contributed to their clinical success. Still, MM remains largely incurable and most patients succumb to relapsed disease. Research is currently ongoing to design new therapeutic strategies able to eradicate residual disease and to prevent relapse.
In this regard, harnessing the immune system is an appealing solution, and new approaches such as NK cell-based therapies and immune checkpoint modulation hold great promise for MM patients. K.N. is supported by The Naito Foundation. M.J.S. is supported by a NH&MRC Australia Fellowship (628623) and Program Grant (1013667). C.G. is supported by a NH&MRC early career fellowship (1107417).
On 1 April 2014 the Bishop Hill blog carried a guest post ‘Dating error’ by Doug Keenan, in which he set out his allegations of research misconduct by Oxford University professor Christopher Bronk Ramsey. Professor Bronk Ramsey is an expert on calibration of radiocarbon dating and the author of OxCal, apparently one of the two most widely used radiocarbon calibration programs (the other being Calib, by Stuiver and Reimer). Steve McIntyre and others opined that an allegation of misconduct was inappropriate in this sort of case, and likely to be counter-productive. I entirely agree. Nevertheless, the post prompted an interesting discussion with statistical expert Professor Radford Neal of the University of Toronto and with Nullius in Verba (an anonymous but statistically-minded commentator). They took issue with Doug's claims that the statistical methods and resulting probability density functions (PDFs) and probability ranges given by OxCal and Calib are wrong. Doug's arguments, using a partly Bayesian approach he calls a discrete calibration method, are set out in his 2012 peer-reviewed paper. I also commented, saying that if one assumes a uniform prior for the true calendar date, then Doug Keenan's results do not follow from standard Bayesian theory. Although the OxCal and Calib calibration graphs (and the Calib manual) are confusing on the point, Bronk Ramsey's papers make clear that he does use such a uniform prior. I wrote that in my view Bronk Ramsey had followed a defensible approach (since his results flow from applying Bayes' theorem using that prior), so there was no research misconduct involved, but that his method did not represent best scientific inference. The final outcome was that Doug accepted what Radford and Nullius said about how the sample measurement should be interpreted as probability, with the implication that his criticism of the calibration method is invalid.
However, as I had told Doug originally, I think his criticism of the OxCal and Calib calibration methods is actually valid: I just think that imperfect understanding rather than misconduct on the part of Bronk Ramsey (and of other radiocarbon calibration experts) is involved. Progress in probability and statistics has for a long time been impeded by quasi-philosophical disagreements between theoreticians as to what probability represents and the correct foundations for statistics. Use of what are, in my view, unsatisfactory methods remains common. Fortunately, regardless of foundational disagreements I think most people (and certainly most scientists) are in practice prepared to judge the appropriateness of statistical estimation methods by how well they perform upon repeated use. In other words, when estimating the value of a fixed but unknown parameter, does the true value lie outside the specified uncertainty range in the indicated proportion of cases? This so-called frequentist coverage or probability-matching property can be tested by drawing samples at random from the relevant uncertainty distributions. For any assumed distribution of parameter values, a method of producing 5–95% uncertainty ranges can be tested by drawing a large number of samples of possible parameter values from that distribution, and for each drawing a measurement at random according to the measurement uncertainty distribution and estimating a range for the parameter. If the true value of the parameter lies below the bottom end of the range in 5% of cases and above its top in 5% of cases, then that method can be said to exhibit perfect frequentist coverage or exact probability matching (at least at the 5% and 95% probability levels), and would be viewed as a more reliable method than a non-probability-matching one for which those percentages were (say) 3% and 10%. 
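The repeated-sampling test just described is straightforward to run. A minimal Monte Carlo sketch for the simple linear-Gaussian case, where a uniform prior is exactly probability matching (all constants are illustrative, not taken from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0          # known measurement error s.d. (illustrative)
n = 100_000

# Draw true parameter values from a wide distribution, then simulate one
# noisy measurement of each.
theta = rng.uniform(-50.0, 50.0, n)
y = theta + rng.normal(0.0, sigma, n)

# 5-95% ranges from a uniform-prior Gaussian posterior: y +/- 1.645*sigma.
z = 1.6449           # 95th percentile of the standard normal
lo, hi = y - z * sigma, y + z * sigma

below = np.mean(theta < lo)   # each should be close to 0.05: the method
above = np.mean(theta > hi)   # is exactly probability matching here
```

The same harness tests any interval-producing method: replace the `lo, hi` line with the method under test and check the two tail proportions.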
It is also preferable to a method for which those percentages were both 3%, which would imply the uncertainty ranges were unnecessarily wide. Note that in some cases probability-matching accuracy is unaffected by the parameter value distribution assumed. I’ll now attempt to explain the statistical issues and to provide evidence for my views. I’ll then set up a simplified, analytically tractable, version of the problem and use it to test the probability matching performance of different methods. I’ll leave discussion of the merits of Doug’s methods to the end. The key point is that OxCal and Calib use a subjective Bayesian method with a wide uniform prior on the parameter being estimated, here calendar age, whilst the observational data provides information about a variable, radiocarbon or 14C age, that has a nonlinear relationship to the parameter of interest. The vast bulk of the uncertainty relates to 14C age – principally measurement and similar errors, but also calibration uncertainty. The situation is thus very similar to that for estimation of climate sensitivity. It seems to me that the OxCal and Calib methods are conceptually wrong, just as use of a uniform prior for estimating climate sensitivity is normally inappropriate. In the case of climate sensitivity, I have been arguing for a long time that Bayesian methods are only appropriate if one takes an objective approach, using a noninformative prior, rather than a subjective approach (using, typically, a uniform or expert prior). Unfortunately, many statisticians (and all but a few climate scientists) seem not to understand, or at least not to accept, the arguments in favour of an objective Bayesian approach. Most climate sensitivity studies still use subjective Bayesian methods. Objective Bayesian methods require a noninformative prior. That is, a prior that influences parameter estimation as little as possible: it lets the data ‘speak for themselves’[i]. 
Bayesian methods generally cannot achieve exact probability matching even with the most noninformative prior, but objective Bayesian methods can often achieve approximate probability matching. In simple cases a uniform prior is quite often noninformative, so that a subjective Bayesian approach that involved using a uniform prior would involve the same calculations and give the same results as an objective Bayesian approach. An example is where the parameter being estimated is linearly related to data, the uncertainties in which represent measurement errors with a fixed distribution. However, where nonlinear relationships are involved a noninformative prior for the parameter is rarely uniform. In complex cases deriving a suitable noninformative prior can be difficult, and in many cases it is impossible to find a prior that has no influence at all on parameter estimation. Fortunately, in one-dimensional cases where uncertainty involves measurement and similar errors it is often possible to find a completely noninformative prior, with the result that exact probability matching can be achieved. In such cases, the so-called ‘Jeffreys’ prior’ is generally the correct choice, and can be calculated by applying standard formulae. In essence, Jeffreys’ prior can be thought of as a conversion factor between distances in parameter space and distances in data space. Where a data–parameter relationship is linear and the data error distribution is independent of the parameter value, that conversion factor will be fixed, leading to Jeffreys’ prior being uniform. But where a data–parameter relationship is nonlinear and/or the data precision is variable, Jeffreys’ prior achieves noninformativeness by being appropriately non-uniform. Turning to the specifics of radiocarbon dating, my understanding is as follows. The 14C age uncertainty varies with 14C age, and is lognormal rather than normal (Gaussian). 
However, the variation in uncertainty is sufficiently slow for the error distribution applying to any particular sample to be taken as Gaussian with a standard deviation that is constant over the width of the distribution, provided the measurement is not close to the background radiation level. It follows that, were one simply estimating the ‘true’ radiocarbon age of the sample, a uniform-in-14C-age prior would be noninformative. Use of such a prior would result in an objective Bayesian estimated posterior PDF for the true 14C age that was Gaussian in form. However, the key point about radiocarbon dating is that the ‘calibration curve’ relationship of ‘true’ radiocarbon age t14C to the true calendar date ti of the event corresponding to the 14C determination is highly nonlinear. (I will consider only a single event, so i = 1.) It follows that to be noninformative a prior for ti must be non-uniform. Assuming that the desire is to produce uncertainty ranges beyond which – upon repeated use – the true calendar date will fall in a specified proportion of cases, the fact that in reality there may be an equal chance of ti lying in any calendar year is irrelevant. Defensible though it is in terms of subjective Bayesian theory, a uniform prior in ti translates into a highly non-uniform prior for the ‘true’ radiocarbon age (t14C) as inferred from the 14C determination. Applying Bayes’ theorem in the usual way, the posterior density for t14C will then be non-Gaussian, with the variation of the standard deviation σd with t14C usually being ignored for individual samples. Figure 1, from Fig. 1 in Doug’s paper, shows an example of an OxCal calibration, with the resulting 95.4% (±2 sigma for a Gaussian distribution) probability range marked by the thin bar above the x-axis. The red curve on the y-axis is centred on the 14C age derived by measurement (the radiocarbon or 14C determination) and shows the likelihood for that determination as a function of true 14C age.
The likelihood for a 14C determination is the relative probability, for any given true 14C age, of having obtained that determination given the uncertainty in 14C determinations. The blue calibration curve shows the relationship between true 14C age (on the y-axis) and true calendar age on the x-axis. Its vertical width represents calibration uncertainty. The estimated PDF for calendar age is shown in grey. Ignoring the small effect of the calibration uncertainty, the PDF simply expresses the 14C determination’s likelihood as a function of calendar age. It represents both the likelihood function for the determination and – since a uniform prior for calendar age is used – the posterior PDF for the true calendar age (Bayes’ theorem giving the posterior as the normalised product of the prior and the likelihood function). By contrast to OxCal’s subjective Bayesian, uniform-prior based method, an objective Bayesian approach would involve computing a noninformative prior for ti. The standard choice would normally be Jeffreys’ prior. Deriving it is somewhat problematic here in view of the calibration curve not being monotonic – it contains reversals – and also having varying uncertainty. Setting those complications aside, and ignoring also the calibration curve being limited in range, Jeffreys’ prior for ti would equal the absolute derivative (slope) of calibrated 14C age with respect to calendar date. Moreover, in the absence of non-monotonicity it is known that in a case like this Jeffreys’ prior is completely noninformative. Jeffreys’ prior would in fact provide exact probability matching – perfect agreement between the objective Bayesian posterior cumulative distribution functions (CDFs – the integrals of PDFs) and the results of repeated testing.
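The recipe can be checked numerically. For a measurement y ~ N(m(θ), σ) with σ fixed, the Fisher information is m′(θ)²/σ², so Jeffreys’ prior is |m′(θ)|/σ – the ‘conversion factor’ between parameter space and data space mentioned earlier. A sketch with an invented smooth monotonic map standing in for the calibration curve (the map and grid are purely illustrative):

```python
import numpy as np

sigma = 1.0                             # fixed measurement error s.d.
m  = lambda th: th + np.sin(th)         # invented data-parameter map
dm = lambda th: 1.0 + np.cos(th)        # its derivative

theta = np.linspace(0.0, 4.0 * np.pi, 2001)
prior = np.abs(dm(theta)) / sigma       # Jeffreys' prior, up to a constant
prior /= prior.sum() * (theta[1] - theta[0])   # normalise on the grid

# Where the map is flat (dm ~ 0, e.g. near theta = pi) the prior is near
# zero: the data can barely distinguish nearby parameter values there.
```

The prior is small exactly where the map plateaus, mirroring the behaviour described for the calibration curve.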
The reason for the form here of Jeffreys’ prior is fairly clear – where the calibration curve is steep and hence its derivative with respect to calendar age is large, the error probability (red shaded area) between two nearby values of t14C corresponds to a much smaller ti range than when the derivative is small. An alternative way of seeing that a noninformative prior for calendar age should be proportional to the derivative of the calibration curve is as follows. One can perform the Bayesian inference step to derive a posterior PDF for the true 14C age, t14C, using a uniform prior for 14C age – which as stated previously is, given the assumed Gaussian error distribution, noninformative. That results in a posterior PDF for 14C age that is identical, up to proportionality, to its likelihood function. Then one can carry out a change of variable from t14C to ti. The standard (Jacobian determinant) formula for converting a PDF between two variables, where one is a function of the other, involves multiplying the PDF, expressed in terms of the new variable, by the absolute derivative of the inverse transformation – the derivative of t14C with respect to ti. Taking this route, the objective posterior PDF for calendar age is the normalised product of the 14C age likelihood function (since the 14C objective Bayesian posterior is proportional to its likelihood function), expressed in terms of calendar age, multiplied by the derivative of t14C with respect to ti. That is identical, as it should be, to the result of direct objective Bayesian inference of calendar age using the Jeffreys’ prior. In order to make the problem analytically tractable and the performance of different methods – in terms of probability matching – easily testable, I have created a stylised calibration curve. It consists of the sum of three scaled shifted sigmoid functions. The curve exhibits both plateaus and steep regions whilst being smooth and monotonic and having a simple derivative. 
Figure 2 shows similar information to Figure 1 but with my stylised calibration curve instead of a real one. The grey wings of the curve represent a fixed calibration curve error, which, as discussed, I absorb into the 14C determination error. The pink curve, showing the Bayesian posterior PDF using a uniform prior in calendar age, corresponds to the grey curve in Figure 1. It is highest over the right hand plateau, which corresponds to the centre of the red radiocarbon age error distribution, but has a non-negligible value over the left hand plateau as well. The figure also shows the objective Jeffreys’ prior (dotted green line), which reflects the derivative of the calibration curve. The objective Bayesian posterior using that prior is shown as the solid green line. As can be seen, it is very different from the uniform-calendar-year-prior based posterior that would be produced by the OxCal or Calib programs for this 14C determination (if they used this calibration curve). The Jeffreys’ prior (dotted green line) has bumps wherever the calibration curve has a high slope, and is very low in plateau regions. Subjective Bayesians will probably throw up their hands in horror at it, since it would be unphysical to think that the probability of a sample having any particular calendar age depended on the shape of the calibration curve. But that is to mistake the nature of a noninformative prior, here Jeffreys’ prior. A noninformative prior has no direct probabilistic interpretation. As a standard textbook (Bernardo and Smith, 1994) puts it in relation to reference analysis, arguably the most successful approach to objective Bayesian inference: “The positive functions π(θ) [the noninformative reference priors] are merely pragmatically convenient tools for the derivation of reference posterior distributions via Bayes’ theorem”. 
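Both posteriors can be reproduced on a grid using a sum-of-sigmoids curve of the kind just described. All constants below (sigmoid centres, widths, heights, the 14C determination and its standard deviation) are invented for illustration, not the post’s values; the qualitative behaviour – the uniform-prior posterior spreading over plateaus while the Jeffreys posterior concentrates on steep sections – is the point:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Stylised calibration curve: sum of three scaled, shifted sigmoids of
# calendar age t. Illustrative constants only.
CENTRES, WIDTH, SCALE = (300.0, 900.0, 1500.0), 40.0, 600.0

def calib(t):
    return sum(SCALE * sigmoid((t - c) / WIDTH) for c in CENTRES)

def calib_deriv(t):
    s = [sigmoid((t - c) / WIDTH) for c in CENTRES]
    return sum((SCALE / WIDTH) * si * (1.0 - si) for si in s)

t = np.linspace(0.0, 2000.0, 4001)       # calendar-age grid
dt = t[1] - t[0]
y14, sigma = 1150.0, 60.0                # hypothetical 14C determination, s.d.

# Likelihood of the determination as a function of calendar age.
like = np.exp(-0.5 * ((calib(t) - y14) / sigma) ** 2)

post_uniform = like / (like.sum() * dt)        # uniform prior in calendar age
post_jeffreys = like * calib_deriv(t)          # Jeffreys' (|slope|) prior
post_jeffreys /= post_jeffreys.sum() * dt
```

The Jeffreys posterior peaks on the steep section of the curve, whereas the uniform-prior posterior puts substantial mass across any plateau the likelihood overlaps.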
Rather than representing a probabilistic description of existing evidence as to a probability distribution for the parameter being estimated, a noninformative prior primarily reflects (at least in straightforward cases) how informative, at differing values of the parameter, the data are expected to be about the parameter. That in turn reflects how precise the data are in the relevant region and how fast expected data values change with the parameter value. This comes back to the relationship between distances in parameter space and distances in data space that I mentioned earlier. It may be thought that the objective posterior PDF has an artificial shape, with peaks and low regions determined, via the prior, by the vagaries of the calibration curve and not by genuine information as to the true calendar age of the sample. But one shouldn’t pay too much attention to PDF shapes; they can be misleading. What is most important in my view is the calendar age ranges the PDF provides, which for one-sided ranges follow directly from percentage points of the posterior CDF. By a one-sided x% range I mean the range from the lowest possible value of the parameter (here, zero) to the value, y, at which the range is stated to contain x% of the posterior probability. An x1–x2% range or interval for the parameter then runs from y1 to y2, where y1 and y2 are the (tops of the) one-sided x1% and x2% ranges. Technically, this is a credible interval, as it relates to Bayesian posterior probability. By contrast, a (frequentist) x% one-sided confidence interval with a limit of y can, if accurate, be thought of as one calculated to result in values of y such that, upon indefinitely repeated random sampling from the uncertainty distributions involved, the true parameter value will lie below y in x% of cases. By definition, an accurate confidence interval exhibits perfect frequentist coverage and so represents, for an x% interval, exact probability matching.
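Reading such one-sided ranges off a posterior CDF is straightforward on a grid. A minimal sketch, assuming a monotonic calibration curve supplied as a callable; the function name, grid limits and interface are my own, not from the post:

```python
import numpy as np

def posterior_quantile(t14c_obs, sigma, curve, prior_weight=None, grid=None):
    """Return a function p -> calendar age at which the posterior CDF
    reaches p. prior_weight(t) gives the prior density up to
    proportionality: None means a uniform prior in calendar age; pass
    the calibration curve's derivative to use Jeffreys' prior."""
    if grid is None:
        grid = np.linspace(0.0, 2000.0, 20001)
    # Gaussian likelihood of the 14C determination, evaluated in calendar age
    like = np.exp(-0.5 * ((curve(grid) - t14c_obs) / sigma) ** 2)
    w = like if prior_weight is None else like * prior_weight(grid)
    cdf = np.cumsum(w)
    cdf /= cdf[-1]                       # normalise to a proper CDF
    return lambda p: float(np.interp(p, cdf, grid))
```

A 2.3–97.7% credible range is then `q(0.023)` to `q(0.977)`; the same call with the curve’s derivative as `prior_weight` gives the objective Bayesian range instead of the uniform-prior one.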
If one-sided Bayesian credible intervals derived using a particular prior pass that test then they and the prior used are said to be probability matching. In general, Bayesian posteriors cannot be perfectly probability matching. But the simplified case presented here falls within an exception to that rule, and use of Jeffreys’ prior should in principle lead to exact probability matching. The two posterior PDFs in Figure 2 imply very different calendar age uncertainty ranges. As OxCal reports a 95.4% range, I’ll start with the 95.4% ranges lying between the 2.3% and 97.7% points of each posterior CDF. Using a uniform prior, that range is 365–1567 years. Using Jeffreys’ prior, the objective Bayesian 2.3–97.7% range is 320–1636 years – somewhat wider. But for a 5–95% range, the difference is large: 395–1472 years using a uniform prior versus 333–1043 years using Jeffreys’ prior. Note that OxCal would report a 95.4% highest posterior density (HPD) range rather than a range lying between the 2.3% and 97.7% points of the posterior CDF. A 95.4% HPD range is one spanning the region with the highest posterior densities that includes 0.954 probability in total; it is necessarily the narrowest such range. HPD ranges are located differently from those with equal probability in both tails of a probability distribution; they are narrower but not necessarily better. What about confidence intervals, a non-Bayesian statistician would rightly ask? The obvious way of obtaining confidence intervals is to use likelihood-based inference, specifically the signed root log-likelihood ratio (SRLR). In general, the SRLR only provides approximate confidence intervals. But where, as here, the parameter involved is a monotonic transform of a variable with a Gaussian distribution, SRLR confidence intervals are exact. So what are the 2.3–97.7% and 5–95% SRLR-derived confidence intervals?
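For a monotonic curve, the SRLR limit reduces to mapping a Gaussian quantile in 14C space back through the curve. A sketch under that assumption (the function name and numerical-inversion grid are mine):

```python
import numpy as np
from statistics import NormalDist

def srlr_bound(t14c_obs, sigma, p, curve, grid=None):
    """One-sided upper confidence limit at level p for calendar age.
    With a monotonic calibration curve, calendar age is a monotone
    transform of a Gaussian variable, so mapping the p-quantile of the
    14C error distribution through the inverse curve is exact."""
    if grid is None:
        grid = np.linspace(0.0, 2000.0, 20001)
    z = NormalDist().inv_cdf(p)            # Gaussian quantile in 14C space
    y = curve(grid)                        # monotone, hence invertible on the grid
    return float(np.interp(t14c_obs + z * sigma, y, grid))
```

A 2.3–97.7% confidence interval is then `srlr_bound(..., 0.023, ...)` to `srlr_bound(..., 0.977, ...)`.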
They are respectively 320–1636 years and 333–1043 years – identical to the objective Bayesian ranges using Jeffreys’ prior, but quite different from those using a uniform prior. I would argue that the coincidence of the Jeffreys’ prior derived objective Bayesian credible intervals and the SRLR confidence intervals reflects the fact that here both methods provide exact probability matching. Whilst an example is illuminating, in order properly to compare the performance of the different methods one needs to carry out repeated testing of probability matching based on a large number of samples: frequentist coverage testing. Although some Bayesians reject such testing, most people (including most statisticians) want a statistical inference method to produce, over the long run, results that accord with relative frequencies of outcomes from repeated tests involving random draws from the relevant probability distributions. By drawing samples from the same uniform calendar age distribution on which Bronk Ramsey’s method is predicated, we can test how well each method meets that aim. This is a standard way of testing statistical inference methods. Clearly, one wants a method also to produce accurate results for samples that – unbeknownst to the experimenter – are drawn from individual regions of the age range, and not just for samples that have an equal probability of having come from any year throughout the entire range. I have accordingly carried out frequentist coverage testing, using 10,000 samples drawn at random uniformly from both the full extent of my calibration curve and from various sub-regions of it. For each sampled true calendar age, a 14C determination age is sampled randomly from a Gaussian error distribution. I’ve assumed an error standard deviation of 30 14C years, to include calibration curve uncertainty as well as that in the 14C determination. 
Whilst in principle I should have used somewhat different illustrative standard deviations for different regions, doing so would not affect the qualitative findings. In these frequentist coverage tests, for each integral percentage point of probability the proportion of cases where the true calendar age of the sample falls below the upper limit given by the method involved for a one-sided interval extending to that percentage point is computed. The resulting proportions are then plotted against the percentage points they relate to. Perfect probability matching will result in a straight line going from (0%, 0) to (100%,1). I test both subjective and objective Bayesian methods, using for calendar age respectively a uniform prior and Jeffreys’ prior. I also test the signed root log-likelihood ratio method. For the Bayesian method using a uniform prior, I also test the coverage of the HPD regions that OxCal reports. As HPD regions are two-sided, I compute the proportion of cases in which the true calendar age falls within the calculated HPD region for each integral percentage HPD region. Since usually only ranges that contain a majority of the estimated posterior probability are of interest, only the right hand half of the HPD curves (HPD ranges exceeding 50%) is of practical significance. Note that the title and y-axis label in the frequentist coverage test figures refer to one-sided regions and should in relation to HPD regions be interpreted in accordance with the foregoing explanation. I’ll start with the entire range, except that I don’t sample from the 100 years at each end of the calibration curve. That is because otherwise a significant proportion of samples result in non-negligible likelihood falling outside the limits of the calibration curve. Figure 3 accordingly shows probability matching with true calendar ages drawn uniformly from years 100–1900. The results are shown for four methods. 
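The coverage test just described is a short simulation loop. A vectorised sketch, assuming any method is packaged as a function from a 14C determination to a one-sided upper bound (the names and defaults are mine):

```python
import numpy as np

def coverage(upper_bound, curve, sigma=30.0, lo=100.0, hi=1900.0,
             p=0.977, n=10_000, seed=0):
    """Draw true calendar ages uniformly on [lo, hi], perturb their 14C
    ages with Gaussian error, and report the fraction of cases in which
    the true age falls at or below the method's one-sided p upper bound.
    Perfect probability matching means this fraction tends to p."""
    rng = np.random.default_rng(seed)
    t_true = rng.uniform(lo, hi, n)
    obs = curve(t_true) + rng.normal(0.0, sigma, n)
    bounds = np.array([upper_bound(o, sigma, p) for o in obs])
    return float(np.mean(t_true <= bounds))
```

Repeating this over each integral percentage point p, and over the various sub-ranges, yields matching curves of the kind plotted in Figures 3–8.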
The first two are subjective Bayesian using a uniform prior as per Bronk Ramsey – from percentage points of the posterior CDF and from highest posterior density regions. The third is objective Bayesian employing Jeffreys’ prior, from percentage points of the posterior CDF. The fourth uses the non-Bayesian signed root log-likelihood ratio (SRLR) method. In this case, all four methods give good probability matching – their curves lie very close to the dotted black straight line that represents perfect matching. Now let’s look at sub-periods of the full 100–1900 year period. I’ve picked periods representing both ranges over which the calibration curve is mainly flattish and those where it is mainly steep. I start with years 100–500, over most of which the calibration curve is steep. The results are shown in Figure 4. Over this period, SRLR gives essentially perfect matching, while the Bayesian methods give mixed results. Jeffreys’ prior gives very good matching – not quite perfect, probably because for some samples there is non-negligible likelihood at year zero. However, posterior CDF points using a uniform prior don’t provide very good matching, particularly for small values of the CDF (corresponding to the lower bound of two-sided uncertainty ranges). Posterior HPD regions provide rather better, but still noticeably imperfect, matching. Figure 5 shows results for the 500–1000 range, which is flat except near 1000 years. The conclusions are much as for 100–500 years save that Jeffreys’ prior now gives perfect matching and that mismatching from posterior CDF points resulting from a uniform prior gives smaller errors (and in the opposite direction) than for 100–500 years. Now we’ll take the 1000–1100 years range, which asymmetrically covers a steep region in between two plateaus of the calibration curve. As Figure 6 shows, this really separates the sheep from the goats. The SRLR and objective Bayesian methods continue to provide virtually perfect probability matching.
But the mismatching from the posterior CDF points resulting from a uniform prior Bayesian method is truly dreadful, as is that from HPD regions derived using that method. The true calendar age would only lie inside a reported 90% HPD region for some 75% of samples. And over 50% of samples would fall below the bottom of a 10–90% credible region given by the posterior CDF points using a uniform prior. Not a very credible region at all. Figure 7 shows that for the next range, 1100–1500 years, where the calibration curve is largely flat, the SRLR and objective Bayesian methods again provide virtually perfect probability matching. However, the uniform prior Bayesian method again fails to provide reasonable probability matching, although not as spectacularly badly as over 1000–1100 years. In this case, symmetrical credible regions derived from posterior CDF percentage points, and HPD regions of over 50% in size, will generally contain a significantly higher proportion of the samples than the stated probability level of the region – the regions will be unnecessarily wide. Finally, Figure 8 shows probability matching for the mainly steep 1500–1900 years range. Results are similar to those for years 100–500, although the uniform prior Bayesian method gives rather worse matching than it does for years 100–500. Using a uniform prior, the true calendar age lies outside the HPD region noticeably more often than it should, and lies beyond the top of credible regions derived from the posterior CDF twice as often as it should. The results of the testing are pretty clear. In whatever range the true calendar age of the sample lies, both the objective Bayesian method using a noninformative Jeffreys’ prior and the non-Bayesian SRLR method provide excellent probability matching – almost perfect frequentist coverage. Both variants of the subjective Bayesian method using a uniform prior are unreliable. 
The HPD regions that OxCal provides give less poor coverage than two-sided credible intervals derived from percentage points of the uniform prior posterior CDF, but at the expense of not giving any information as to how the missing probability is divided between the regions above and below the HPD region. For both variants of the uniform prior subjective Bayesian method, probability matching is nothing like exact except in the unrealistic case where the sample is drawn equally from the entire calibration range – in which case over-coverage errors in some regions on average cancel out with under-coverage errors in other regions, probably reflecting the near symmetrical form of the stylised overall calibration curve. I have repeated the above tests using 14C error standard deviations of 10 years and 60 years instead of 30 years. Results are qualitatively the same. Although I think my stylised calibration curve captures the essence of the principal statistical problem affecting radiocarbon calibration, unlike real 14C calibration curves it is monotonic. It also doesn’t exhibit variation of calibration error with age, but such variation shouldn’t have a significant impact unless, over the range where the likelihood function for the sample is significant, it is substantial in relation to 14C determination error. Non-monotonicity is more of an issue, and could lead to noticeable differences between inference from an objective Bayesian method using Jeffreys’ prior and from the SRLR method. If so, I think the SRLR results are probably to be preferred, where it gives a unique contiguous confidence interval. 
Jeffreys’ prior, which in effect converts length elements in 14C space to length elements in calendar age space, may convert single length elements in 14C space to multiple length elements in calendar age space when the same 14C age corresponds to multiple calendar ages, thus over-representing in the posterior distribution the affected parts of the 14C error distribution probability. Initially I was concerned that the non-monotonicity problem was exacerbated by the existence of calibration curve error, which results in uncertainty in the derivative of 14C age with respect to calendar age and hence in Jeffreys’ prior. However, I now don’t think that is the case. Does the foregoing mean the SRLR method is better than an objective Bayesian method? In this case, perhaps, although the standard form of SRLR isn’t suited to badly non-monotonic parameter–data relationships and non-contiguous uncertainty ranges. More generally, the SRLR method provides less accurate probability matching when error distributions are neither normal nor transforms of a normal. Many people may be surprised that the actual probability distribution of the calendar date of samples for which radiocarbon determinations are carried out is of no relevance to the choice of a prior that leads to accurate uncertainty ranges and hence is, IMO, appropriate for scientific inference. Certainly most climate scientists don’t seem to understand the corresponding point in relation to climate sensitivity. The key point here is that the objective Bayesian and the SRLR methods both provide exact probability matching whatever the true calendar date of the sample is (provided it is not near the end of the calibration curve). Since they provide exact probability matching for each individual calendar date, they are bound to provide exact probability matching whatever probability distribution for calendar date is assumed by the drawing of samples.
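That the Jeffreys’-prior posterior and the SRLR bound coincide for a monotonic curve can be checked numerically. A self-contained sketch using an illustrative monotone toy curve of my own choosing (not the post’s stylised curve), comparing the 97.7% bounds from the two routes:

```python
import numpy as np
from statistics import NormalDist

grid = np.linspace(0.0, 2000.0, 40001)
# Illustrative monotone curve and its exact derivative (my own choices)
curve = lambda t: 400.0 + t + 300.0 * np.tanh((t - 1000.0) / 150.0)
deriv = lambda t: 1.0 + 2.0 / np.cosh((t - 1000.0) / 150.0) ** 2

t14c_obs, sigma, p = 1400.0, 30.0, 0.977

# Objective Bayesian route: posterior ~ likelihood x Jeffreys' prior (~ derivative)
w = np.exp(-0.5 * ((curve(grid) - t14c_obs) / sigma) ** 2) * deriv(grid)
cdf = np.cumsum(w)
cdf /= cdf[-1]
bayes_bound = float(np.interp(p, cdf, grid))

# SRLR route: map the Gaussian p-quantile in 14C space through the inverse curve
srlr_bound = float(np.interp(t14c_obs + NormalDist().inv_cdf(p) * sigma,
                             curve(grid), grid))
```

Up to grid resolution, the two bounds agree, illustrating that both methods are performing the same conversion between probability in 14C space and probability in calendar-age space.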
How do the SRLR and objective Bayesian methods provide exact probability matching for each individual calendar date? It is easier to see that for the SRLR method. Suppose samples having the same fixed calendar date are repeatedly drawn from the radiocarbon and calibration uncertainty distributions. The radiocarbon determination will be more than two standard deviations (of the combined radiocarbon and calibration uncertainty level) below the exact calibration curve value for the true calendar date in 2.3% of samples. The SRLR method sets its 97.7% bound at two standard deviations above the radiocarbon determination, using the exact calibration curve to convert this to a calendar date. That bound must necessarily lie at or above the calibration curve value for the true calendar date in 97.7% of samples. Ignoring non-monotonicity, it follows that the true calendar date will not exceed the upper bound in 97.7% of cases. The bound is, given the statistical model, an exact confidence limit by construction. Essentially Jeffreys’ prior achieves the same result in the objective Bayesian case, but through operating on probability density rather than on its integral, cumulative probability. Bayesian methods also have the advantage that they can naturally incorporate existing information about parameter values. That might arise where, for instance, a non-radiocarbon based dating method had already been used to estimate a posterior PDF for the calendar age of a sample. But even assuming there is genuine and objective probabilistic prior information as to the true calendar year, what the textbooks tell one to do may not be correct. Suppose the form of the data–parameter relationship differs between the existing and new information, and it is wished to use Bayes’ theorem to update, using the likelihood from the new radiocarbon measurement, a posterior PDF that correctly reflects the existing information. 
Then simply using that existing posterior PDF as the prior and applying Bayes’ theorem in the standard way will not give an objective posterior probability density for the true calendar year that correctly combines the information in the new measurement with that in the original posterior PDF. It is necessary to use instead a modified form of Bayesian updating (details of which are set out in my paper at http://arxiv.org/abs/1308.2791). It follows that if the existing information is simply that the sample must have originated between two known calendar dates, with no previous information as to how likely it was to have come from any part of the period those dates define, then just using a uniform prior set to zero outside that period would bias estimation and be unscientific. And how does Doug Keenan’s ‘discrete’ calibration method fit into all this? So far as I can see, the uncertainty ranges it provides will be considerably closer to those derived using objective Bayesian or SRLR methods than to those given by the OxCal and Calib methods, even though like them it uses Bayes’ theorem with a uniform prior. That is because, like the SRLR and (given monotonicity) Jeffreys’ prior based objective Bayesian methods, Doug’s method correctly converts, so far as radiocarbon determination error goes, between probability in 14C space and probability in calendar year space. I think Doug’s treatment of calibration curve error avoids, through renormalisation, the multiple counting of 14C error probability that may affect a Jeffreys’ prior based objective Bayesian method when the calibration curve is non-monotonic. However, I’m not convinced that his treatment of calibration curve uncertainty is noninformative even in the absence of it varying with calendar age. Whether that makes much difference in practice, given that 14C determination error appears normally to be the larger of the two uncertainties by some way, is unclear to me.
Does the uniform prior subjective Bayesian method nevertheless have advantages? Probably. It may cope with non-monotonicity better than the basic objective Bayesian method I have set out, particularly where that leads to non-contiguous uncertainty ranges. It may also make it simpler to take advantage of chronological information where there is more than one sample. And maybe in many applications it is felt more important to have realistic looking posterior PDFs than uncertainty ranges that accurately reflect how likely the true calendar date is to lie within them. I can’t help wondering whether it might help if people concentrated on putting interpretations on CDFs rather than PDFs. Might it be better to display the likelihood function from a radiocarbon determination (which would be identical to the subjective Bayesian posterior PDF based on a uniform prior) instead of a posterior PDF, and just to use an objective Bayesian PDF (or the SRLR) to derive the uncertainty ranges? That way one would both get a realistic picture of what calendar age ranges were supported by the data, and ranges that the true age would lie above or below in the stated percentage of instances. Professor Bronk Ramsey considers that knowledge of the radiocarbon calibration curve does give us quantitative information on the prior for 14C ‘age’. He argues that the belief that in reality calendar dates of samples are spread uniformly means that a non-uniform prior in 14C age is both to be expected and is what you would want. That would be fine if the prior assumption made about calendar dates actually conveyed useful information.
Where genuine prior information exists, one can suppose that it is equivalent to a notional observation with a certain probability density, from which a posterior density of the parameter given that observation has been calculated using Bayes’ theorem with a noninformative ‘pre-prior’, with the thus computed posterior density being employed as the prior density (Hartigan, 1965). However, a uniform prior over the whole real line conveys no information. Under Hartigan’s formulation, its notional observation has a flat likelihood function and a flat pre-prior. Suppose the transformation from calendar date to 14C age using the calibration curve is effected before the application of Bayes’ theorem to the notional observation for a uniform prior. Then its likelihood function remains flat – what becomes non-uniform is the pre-prior. The corresponding actual prior (likelihood function for the notional observation multiplied by the pre-prior) in 14C age space is therefore non-uniform, as claimed. But when the modified form of Bayesian updating set out in my arXiv paper is applied, that prior has no influence on the shape of the resulting posterior PDF for true 14C age and nor, therefore, for the posterior for calendar date. In order to affect an objective Bayesian posterior, one has to put some actual prior information in. For instance, that could be in the form of a Gaussian distribution for calendar date. In practice, it may be more realistic to do so for the relationship between the calendar dates of two samples, perhaps based on their physical separation, than for single samples. Let me give a hypothetical non-radiocarbon example that throws light on the uniform prior issue. Suppose that a satellite has fallen to Earth and the aim is to recover the one part that will have survived atmospheric re-entry.
It is known that it will lie within a 100 km wide strip around the Earth’s circumference, but there is no reason to think it more likely to lie in any part of that strip than another, apart from evidence from one sighting from space. Unfortunately, that sighting is not very precise, and the measurement it provides (with Gaussian error) is non-linearly related to distance on the ground. Worse, although the sighting makes clear which side of the Earth the satellite part has hit, the measurement is aliased and sightings in two different areas of the visible side cannot be distinguished. The situation is illustrated probabilistically in Figure 9. In Figure 9, the measurement error distribution is symmetrically bimodal, reflecting the aliasing. Suppose one uses a uniform prior for the parameter, here ground distance across the side of the Earth visible when the sighting was made, on the basis that the item is as likely to have landed in any part of the 100 km wide strip as in any other. Then the posterior PDF will indicate an 0.825 probability that the item lies at a location below 900 (in the arbitrary units used). If one instead uses Jeffreys’ prior, the objective Bayesian posterior will indicate a 0.500 probability that it does so. If you had to bet on whether the item was eventually found (assume that it is found) at a location below 900, what would you consider fair odds, and why? Returning now to radiocarbon calibration, there seems to me no doubt that, whatever the most accurate method available is, Doug is right about a subjective Bayesian method using a uniform prior being problematical. By problematical, I mean that calibration ranges from OxCal, Calib and similar calibration software will be inaccurate, to an extent varying from case to case. Does that mean Bronk Ramsey is guilty of research misconduct? As I said initially, certainly not in my view. 
Subjective Bayesian methods are widely used and are regarded by many intelligent people, including statistically trained ones, as being theoretically justified. I think views on that will eventually change, and the shortcomings and limits of validity of subjective Bayesian methods will become recognised. We shall see. There are deep philosophical differences involved as to how to interpret probability. Subjective Bayesian posterior probability represents a personal degree of belief. Objective Bayesian posterior probability could be seen as, ideally, reflecting what the evidence obtained implies. It could be a long time before agreement is reached – there aren’t many areas of mathematics where the foundations and philosophical interpretation of the subject matter are still being argued over after a quarter of a millennium! [i] A statistical model is still involved, but no information as to the value of the parameter being estimated is introduced as such. Only in certain cases is it possible to find a prior that has no influence whatsoever upon parameter estimation. In other cases what can be sought is a prior that has minimal effect, relative to the data, on the final inference (Bernardo and Smith, 1994, section 5.4). [ii] I am advised by Professor Bronk Ramsey that the method was originally derived by the Groningen radiocarbon group, with other notable related subsequent statistical publications by Caitlin Buck and her group and Geoff Nicholls.
2019-04-21T01:15:59Z
https://www.nicholaslewis.org/radiocarbon-calibration-and-bayesian-inference/
This has been going on for two years. I thought it would fade away, but it has not, and I have made an official report to the police. I am a former British and world international boxing champion, and I think that is one of the reasons this has been happening. This is affecting my entire life; there are a lot of people that read this and take it for fact. I have been depressed for the last two years. I can't walk down the street without them making derogatory remarks about me. I will be taking legal action if this harassment does not stop. Will Haines of Arnold, Missouri has put in my email address when setting up his twitter account. I'm getting some of his mail (such as Twitter requests to confirm his account!) and I can't set up my own twitter account. I can't contact him, can I? Twitter, please ask him to use his own email address. They suspended my account, which I feel was unjustified, then unsuspended it yesterday evening. I cannot log in; it keeps saying sorry, but the information cannot be found. I do not have access to this account. The followers and following have been deleted as well. This occurred yesterday evening, after I received the email from customer support. I have been trying to get feedback as to when these issues will be corrected, and still have no response as to whether these problems will be corrected. I have sent them emails and tweets regarding changing my primary email and password, and so far nothing – I can't sign in. My twitter account was suspended for no reason. Can you tell me why this has happened? I am disappointed. Hi Twitter, I am aghast that twitter has been repeatedly harassing my contacts in my name, saying that I am waiting for them to join me on twitter. PLEASE TELL PEOPLE THAT YOU ARE GOING TO DO THIS WHEN ONE SIMPLY PRESSES “IMPORT YOUR CONTACTS”, which I thought simply meant I could see who I knew on twitter and personally contact them myself, at my discretion, from my email contacts – this is VERY misleading and DEEPLY disturbing.
I am getting upset friends and acquaintances contacting me thinking I personally sent that. VERY UPSETTING – OVER A PERIOD OF NUMEROUS WEEKS – VERY BAD TASTE IN MY MOUTH. PLEASE CHANGE YOUR POLICY – THIS IS FRAUD AND HARASSMENT. I believe that you sent me an email saying that my account was unsuspended, so how come it says my account is still suspended? In my other complaint I made it clear that my mistake will not be repeated, so I'm still befuddled about what's going on right now, and I am incredibly angry about the inconvenience you have caused. My twitter account was suspended due to the idiosyncrasies of a companion of mine, and because of this I can no longer access my account: it says invalid name or password. I am sorry about this childish situation with my friend and I hope you can unsuspend my account so I can correct the wrong that has been done. Forgive this incident and I will do everything in my power to never let any violation of your rules take place. The rule that was broken was churning, which I did not know what it was until I searched it up on the Internet. I am unbelievably sorry for this unfathomable thing, and I too am not very happy with one of my friends. I hope that you can forgive this, and I give you my word that none of this behaviour will take place again. Hi, I am writing to complain about the fact that my friend and I created a fan page on twitter, and after a few days it would kick us off and wouldn't let us back onto the account. I don't know the reasons for this; it is the fourth time we have made an account and it has kicked us off. I'm really not happy about this. Is there any single reason for this? Could you please let us know as soon as possible and get back to us. Hi, my twitter is playing up: it is saying my username and password are incorrect. Well, I've changed my password and it's still not letting me tweet. It's saying I can't be found. It's happened before. Can you fix it, or shall I make a television complaint and make it worse!!?
Just over 2 weeks ago, my Twitter account was hacked in the early hours and my account was sending out a Get Rich Quick tweet. When logging onto Twitter Help, it suggested I change my password, which I did, but 24 hours later I was sending out spam to all my followers. I think this was due to me opening a link from a follower claiming he had discovered a Twitter app to see who was following you – little did I know that @ermcarter (carter brandon) had been infected with a virus too. I only have internet on my BlackBerry, so my cousin de-activated my account for me, and both he and I on separate occasions tried to access it; we couldn't, because of the de-activation, so we knew it had worked. However, I am still receiving followers, and a friend says someone is using my old account @KaraWalmsley. I have sent 2 complaints (the first via API, which was passed on to the relevant dept) but I have heard nothing and am disgusted at being ignored like this, especially when I told them someone is masquerading as me and I do not want people conned via an account with my name. Help! I am being ignored by Twitter. I have sent 2 complaints to them to inform them that, because I have been hacked once and contracted a virus, I de-activated my account. I have since found out that someone is using this account, plus I am still receiving followers. Twitter just don't seem to care about this serious issue. My old account @KaraWalmsley was under my name, and if someone is using my name to scam people, this is very serious indeed! Twitter says they set limits on tweets per day for those who have an account with Twitter. Twitter states on its Current Twitter Limits page that, “Updates: 1,000 per day. The daily update limit is further broken down into smaller limits for semi-hourly intervals. Retweets are counted as updates.” Well, Twitter thinks 100 tweets is 1,000 tweets and stops me from updating. I'd contacted Twitter and gotten wrong numbers and emails that don't work.
Nobody at Twitter cares to listen; they send you only to their Help Center page, and that is no help to me. The South Yorkshire man who was recently convicted for joking about bombing an airport on Twitter is now just one of many who have done the exact same thing. Now, thousands of Twitter users are expressing support for Chambers, repeating his tweet together with the hashtag #IAmSpartacus, a reference to the film Spartacus, in which fellow gladiators express solidarity with Spartacus by uttering the phrase, “I am Spartacus.” I don't really understand this, because people are supporting something that shouldn't be a joke! I know it's all fun and games; I just don't think this is the time or the place to support somebody who joked about blowing up an airport on a public website. Twitter is not private, it is completely public! How can we not take these kinds of threats, jokes, and anything even close very seriously, given the history of terrorism worldwide? If somebody had blown up a building but had tweeted about it earlier in the week, I'm sure the public outcry would be about why something wasn't done earlier. So I went on twitter today to check my usual friends' tweets. A bright yellow “promoted” badge catches my eye in the trends section, next to a link to “McRib is back”. I don't really think that's something that would normally trend, so I check out the hashtag to see what's up. To my delight (not being a huge fan of McDonald's or the McRib), I find page after page of tweets making fun of the McRib sandwich. Looks like McDonald's plan backfired, unless their plan was to create the largest wave of negative PR for the greatest amount of money. It just upsets me as somebody who loves social networks as a means of staying up to date on news and connecting with other people. I guess it's the form of monetization that Google follows in showing “sponsored listings” that otherwise would never have shown up in organic search results.
If you are going to have trending topics, great; if you are going to have people pay to fake trends, then place those somewhere else so I can ignore them. This is just selling off social networking to the highest bidder, and I don't like where this is headed. Am I alone on this one? Oh well, at least it didn't work out like McDonald's wanted. They trended the McRib on Twitter, but people are smarter than that. This is another example of somebody wanting to be a player in social media but having no clue how social works. It is a totally different environment from a typical marketing play. I'm sure they paid Twitter a small fortune for that trend kickstart, but did they really expect the entire Twitter community to jump all over their beloved McRib?

Twitter spam seems to be everywhere more and more these days. For every helpful tweet there are a million about making ten thousand dollars in five hours, or helping cure cancer; it just gets annoying after a while. Is it just me? I like to follow lots of people on Twitter for good info, but lately people who don't even know me tweet replies @ me just to scam me into buying some weird product.

I had trouble setting up my Twitter account and directed a query by email to Twitter Support. I never heard back. I sent a second query... never heard back. In frustration I sent four or five email queries and got back seven replies telling me to go to their customer support site. Worthless. So I cancelled my account and opened a new one. Apparently they tagged me, and the account they let me set up only worked half the time. I could tweet along for a few days and then I couldn't. I submitted reports and was ignored. After a few days the jerk in support would let me tweet again. For a few days. Then it started all over again. I must have contacted customer support a good dozen times over the three months I had the account. At best they told me to reset my password. Worthless.
I must have really upset support the last time, because I sent them four or five emails explaining what I'd done to fix the problem and asking what they'd figured out. I knew they weren't even looking into the problem. Anyway, this last time, after they let me tweet again, I was tweeting away and discovered they had locked me out of my account. I tried the prescribed way to remedy the problem, but they wouldn't let me. I emailed them to shove it, and now I'm suspended, probably for life. I don't care. I feel free of them now and can move on. If there were a negative rating for them I'd go with -5.

I desperately need to delete an old Twitter account, and I cannot, due to lack of proof that the old account is mine.

I need your urgent attention, please: I am likely to lose my active Twitter account, since I can't log in via my computer anymore. I forgot my password, and I used all the available channels to retrieve it, all in vain, until I exceeded the number of tries allowed.

My account was hacked, and I contacted Twitter to try and remove the 3,500 likes that had been added, as well as 250 dubious new followers. I had not used my account for a couple of years and asked them just to wipe it clean if necessary. Their (macro) response was "Unfortunately, we aren't able to help with this issue." When I tried to follow it up, I got another macro email saying the case had been closed and I would need to open a new one. I tried again via another department and got exactly the same response. "Support" is not how I'd describe the service; there is no support.

Twitter in the Windows app store has not been updated at all. I'm complaining about the lack of updates to Twitter on Windows Phone. It is nothing like Apple's App Store application, and it is nothing like the Google Play Store application. You guys should really update the application on Windows. Go check out the reviews of Twitter on Windows Phone. Everyone wants an update. Please listen to the people.

I normally follow approx.
28 groups and have 5 followers. The last time I posted to Twitter was 2009. I follow a couple of conservative groups. My family has been in the US for ten-plus generations, and I have no foreign interest or loyalty. On Twitter, my account was added to 350 groups, including anti-American and Arabic-speaking ones. Twitter threw in some anime and big-breasted-women groups, I assume for their amusement. I found Twitter's problem-reporting mechanism circular and completely useless. It dead-ends with "change your password." Next stop is their agent for service. I don't dare cancel the account, because I cannot be sure Twitter won't invent something on the back end for more mischief. If you don't have a Twitter account, don't get one. If you have one that is low- or non-use, check it periodically to be sure you haven't been added to bad groups without your knowledge.

Why did Twitter take down the hashtag #DNCLEAKS? Are you censoring free speech now?

I tweeted @Support and received no response (though I acknowledge they may not have seen it, since my issue is tweets not being seen). I reported my issue via the Twitter Help Centre. This was five days ago. I have received no reply, no update, not even an acknowledgement that my report has been received. The issue has persisted, and my followers and I are becoming very distressed. Several of my followers have also tweeted @Support and submitted reports about the issue, and none of them have received any response either. I find this to be appalling customer service. I can't even confirm they have my report! Their support email is also no longer monitored, per an automatic reply I got when I tried that avenue. I would have made a complaint to Twitter directly to make further attempts to resolve this matter, but of course there is no means of doing so. This is just not good enough, Twitter.

Please delete @RollinsGupta. This is a fake account, and this guy stole my identity, my information, and my pics.
Whenever I try to log in to Twitter I am unable to do so. I am being told that my computer does not have access to view Twitter. When not logged in, I am able to look at other people's pages. I would like to know if this means I have been blocked from signing in, and if so, why. The email address I mentioned above is the one I use to sign in with.

I can't log in to my account anymore, simply because I have to reset my password through an email account whose password I have also forgotten. Come on, Twitter, help me out!

Upset, and need to know how to block an account on Twitter. Images attached.

My follower count has been static (3550-3553) for the past few months. Whenever I deactivate my account, the count becomes another figure (4000), which I think is the right one. After some hours it comes back to the old static number. I have escalated this several times to Twitter customer support, but they do not get back to me.

Somebody changed my location to Tokyo, and all of my words were translated into Japanese. My Twitter handle was changed as well, and now it won't let me change anything back. I am now suspended from my account and I can't do anything.

Okay, first of all, I searched my name (Isaac) under Images, and I found my old Twitter account's profile picture. I don't want my pictures going out to the public. I should sue Twitter corporate headquarters for posting my picture on Google without my consent.

Good day, Twitter corporate office; this information is for CEO Jack Dorsey. In 2004, I created the name Twitter. I am on a satellite communication system that created our nation's internet system. This computer internet system can understand all of my thoughts, and somehow you got my information that created Twitter and all of its features.
So I feel these facts were wrongly taken and created a profit for you, so I believe I should get paid a percentage, like half of your business, because I have half of your computer system and how it also works with me daily. I hope we can work this out peacefully.

For more than a year, I have been facing problems when I have conversations with someone. No one gets a notification of my reply on Twitter, and when I retweet them, they don't get a notification either. I complained to Twitter support and on the Play Store too, but no one responded. When I reply to someone, he doesn't get a notification; when I retweet someone, no one gets a notification. No one is getting notifications from me, and Twitter has no customer service number. Is this not 2016?

I opened a Twitter account about a month ago. I did not like the fact that it is nearly impossible to make your statement in so few allowed words without using ridiculous abbreviations. I forgot my password, and since Twitter makes you enter it every other time you go on, I should have written it down. It would not accept my password, and it was impossible to retrieve it from Twitter (they said they sent me a code on my phone, and three times they didn't send the code). Finally I gave up and opened a new account. I tried to get help because I couldn't see other tweets on Fox News, tried everything, and finally gave up again and closed the account again. I am done with your site, as it is just too much of a hassle. I can see why your stock is tanking. Facebook ALWAYS works, so I guess I will stay there. You need a better help section, too: I typed in my question and just got the closest answer the site had, not the answer I needed. Bye bye, Twitter.

Recently, unless I agree to give permission to access my photos, Twitter won't let me post. Outrageous!

First, my personal account was suspended for supposed automated or bot behavior. I only schedule tweets via TweetDeck, and not regularly or even on a daily basis. Other connected apps include WordPress.com and Twitter for Android, among others.
My regular number of tweets would be 25-30 a day, including replies, a number of favorites, and retweets. I do not promote anything on my page at all, and it's actually quite personal, so I was surprised it got locked. Trying to unlock my account took a while because of the delay in sending the verification code. Some codes came in a day late, so when I tried inputting them I got an "invalid code" during my attempts to unlock it. I was able to restore my account about three days later, but I also got an email saying my account was suspended "due to multiple or repeat violations of the Twitter Rules." I was not informed whatsoever of the specific violations I had committed. Again, my account @francoexists has not been used in any shape, way, or form for malicious, spam, or commercial purposes.

Second, when I finally got my account back, I was able to tweet for a day and a half before I started encountering issues, this time with tweeting and sending direct messages. I noticed it first using the Twitter app for Android, where I would repeatedly get a "failed to tweet" notification. It was when I used my web browser that I found out the reason: Twitter Error Code 226. This has been happening for a week now, since January 17. I am again not sure why Twitter is tagging my tweets as spam or malicious when I do not promote or sell products, nor use any automated means apart from the TweetDeck scheduling, which isn't even on a regular basis. The only automatic links I send are usually via WordPress.com (automatic sharing upon publishing a blog). I tried the other solutions recommended here: https://twittercommunity.com/t/error-code-226-this-request-looks-like-it... but to no avail, including logging out and resetting my password. In fact, I have reset my password four times already. This is really frustrating, because I do not understand or know what violation I committed, if any, and because I am not getting any troubleshooting support from Twitter for my account.
For the record, I have also revoked access for all apps connected to this account of mine. I have two accounts, and the other one is working normally on both desktop and mobile devices (I use multiple devices when it comes to Twitter). I have received no support or response from your support team whatsoever. I have tried replying to the tickets I filed, only to get a response that "the issue" was resolved. I have filed over five tickets regarding my inability to tweet, and not a single case number was given to me.

I have deactivated my account and continue to get messages that it has been reactivated, despite my messaging them on three occasions requesting that they delete the account; yet it continues to be hacked and reactivated. Their system is obviously NOT secure, and I don't know how to get them to actually do as I request. I am beyond irritated with Twitter. Can you advise how one gets them to delete an account?

I've been charged with a crime under section 127 of the Communications Act, and I go on trial on 27 January for a crime I did NOT commit. I'm innocent. I contacted Twitter and opened four files; despite my showing them that the username used to send the message isn't mine, they won't confirm this fact. All they repeatedly say is that the account is deleted. I am due back in court on 20 January for a last chance before the trial begins a week later. If you can help me get a positive reply from Twitter that proves my innocence, it'd be much appreciated.

I don't do Twitter, but some woman is using my email to join Twitter, which makes no sense; our names are nothing alike, and I don't do social media for personal reasons.

The email I used above is my new email address, and I don't remember my username. I think you have my old, now defunct email address. I want to change it. When I read your help pages, they always tell me to first sign in to Twitter.com and then hit the icon which will give me a drop-down list of things, including what I need: Account and then Settings.
But when I put in twitter.com I get all these faces, etc. Nowhere do I see the icon the help pages told me to click on. The icon is simply not there. I always imagined the makers and operators of something like Twitter were very bright people. I am now seeing this is not true. Now, can you or can you not tell me how to change my email address and username? And please don't tell me to sign onto twitter.com like all your instructions do, because there is no icon there to access any drop-down menu with Account or Settings.

I have been unable to delete my latest tweet, although I have tried many times. The link in the tweet does not go to the poem to which I am directing readers and followers. All my other tweets have done this without a problem, or I have been able to delete them easily where there was an error. What has happened to the delete button? If you cannot help, I will delete my account altogether, because this is misleading for readers and followers.

I write to complain about Twitter placing tweets with me that are "extra." I am happy to receive retweets, and I appreciate that Twitter needs to send promoted tweets, and also suggestions for others I may wish to follow. However, I object most strongly to being sent tweets by Twitter itself from people I do not follow. Nowadays I am receiving tweets from people followed by people I follow. One of the joys of Twitter is the speed with which I can assimilate information; unwanted tweets slow this process down. Is there any way I can block unwanted tweets generated by Twitter itself, please?

Their customer service is not all that great. I received an email at my job stating I had opened a Twitter account. I have not opened a Twitter account and do not know anything about it. I would like a report run to verify who did this, as I believe someone at my job is trying to disgrace me. After I found out about this, I tried to find the Twitter customer service phone number, and found nothing.
How can I contact them and get help to shut this down?

I tried to respond to my tweets. The system is just not responding. Please fix it as soon as you can.

Twitter has locked my account without explanation, and now they are blocking posts, retweets, and follows. I have not violated any of the rules. This is a personal business account which has been professional from day one. I have not violated copyright laws or harassment regulations, or used indecent or foul language. Twitter has not provided any answer or explanation as to why they are blocking and selectively cutting off or shutting down parts of my account.

Some Twitter users are posting inflammatory information which solicits immediate and heated responses. I responded to a tweet, and my account was later locked by Twitter because someone complained. I was never offered an explanation nor asked to delete the tweet. I sent an email to Twitter asking for more specific information, but my account remains locked. My opinion is that Twitter should not allow organizations to have accounts when those same organizations are posting inflammatory information that solicits angry responses. Many political action movements are now using Twitter to gather support for their causes, and Twitter, for some reason, caters to and gives preferential treatment and protection to these organizations and their administrators. It's extremely unfair. I can't lodge a complaint with Twitter, because they only have a small office to handle these complaints, and when they do respond they only give short, generic responses with no long-term answers. I use Twitter to talk with my coworkers and friends. Twitter should not be so "sensitive" as to lock accounts.

The so-called "support" team at Twitter are utterly, utterly incompetent, allowing blatant racist and homophobic attacks and abuse to keep on going. This is hate speech, which in my country is a crime.
My son and I have reported it multiple times, with hundreds of tweets in total as evidence. Do they do anything? Do they hell! They say it "doesn't break their policy," which is ridiculous because it clearly breaks the law! I have had it with these lazy, incompetent, and clearly hateful morons! The safety of their users is not in their best interest, apparently, when just a couple of years ago they took swift action on this type of abuse. Now they ALLOW IT! I am utterly disgusted and outraged that they're allowing this to continue and don't care. I'm so angry that innocent people are made to suffer because of this. My 14-year-old niece committed suicide after one of these attacks on Twitter just months ago. Facebook stepped up, changed their policy, made more reporting options, and came down hard on people who broke them, as did Instagram and YouTube! So why can't Twitter? I don't understand. I'm so ashamed of this, but they're cowards and won't let me file an official complaint. It's funny how they give no contact information for their company!

I had an account with Twitter for some time. I wouldn't use it that frequently, but when I logged in, I'd find thousands of people I was following that I hadn't followed; my account was being hacked on a regular basis. I never had this problem with any other social networking site. I logged in one day to find my account had been suspended. When I appealed this, I got an automated response saying it had been suspended due to multiple violations, even though I told them it was due to being hacked. So I appealed again, this time slightly more frustrated with the poor service, and got another automated response stating the same. Why should I be punished for Twitter's substandard security? Twitter, I'm disgusted by your lazy, incompetent customer service. You seriously need to sort it out, since you clearly have a serious security problem for which users are being penalized! The website is no help at all for customers.
When I logged in to deactivate my account, I followed the steps to the letter and scrolled all the way to the bottom, and the deactivation button was not there. I went to the help line on the website to see if they could help me with deactivating my account, looked at all the information, and then went back to the same spot to see if I had missed it, and it still wasn't there. So I left a comment at the help line on the website, and it did not submit. I just want my account to be deactivated and for Twitter to provide some customer service for a change!

I do not have a Twitter account, but it says my number is in use. What the heck?

I do not like the idea of the new algorithmic news feed. I follow several key weather stations for updates.

I am a social media geek with a specific handle. I have so many parodies and impersonations that I get complaints from my fans and business associates. I believe the only way out is to verify my account by giving it the blue check mark. I've done all that needs to be done, but with so many false affiliated handles, all I seek is the verified badge. I would be eternally grateful.

Eight weeks unable to access tweets and follows! If I go into "add account," after I enter my password I may, or may NOT, be able to scroll through current tweets. A previous account had blocked tweets that were anti-immigration or anti-Barack Obama. Tweets noting the numbers of casualties in American wars were simply NOT posted if they mentioned the Mexican-American War. I had trolls telling me it is a done deal: Browns moving in, Blacks/Whites move out. This is political harassment. Please fix it.

Twitter has put the following before my website: "The link you are trying to access has been identified by Twitter or our partners as being potentially harmful. This link etc."
And since your partner is the Australian Commonwealth Government, I believe that this is a conflict of interest, and you are attacking my freedom to represent all these people whose rights have been ignored.

I can never follow back my followers. I get a message that I have hit a follow limit; it seems I am always at that limit. That really is my complaint, and I don't know what else I can add. The customer service at Twitter is also non-existent.

When I tweet, it does not show up in my timeline. I think it has not worked and tweet again, ending up with multiple duplications. Also, a photo of a little creek was removed from my tweets, and not by me. This was about the play Snugglepot and Cuddlepie, a real play well known to Australians, by the author May Gibbs, and they had NO reason to remove it.

I was taken for $27.50. I was approached by a Sam Smith, who said he would like to sell The Walking Dead AMC page on Twitter; I think it is the fan page. He asked me if I would be interested, and I had a long conversation with him. He talked me into buying it, telling me I could make a lot of money off it, and said he needed the money because he was down and out. I am a sucker! When I sent the money by PayPal, he blocked me from talking to him and also from the Walking Dead fan page. I have kept all the conversations. I either want my money back or the Twitter page. I am very upset that I cannot even go on Twitter without being taken.

I complain that your customers have been writing bad things on Twitter about personal information, helping the bullying, slander, and harassment on your site, at the Belk Avenues Mall in Jacksonville and all over the world, by other churches, hurting someone who lived a Christian life. Your site has been helping hurt another person; please stop what is going on before she gets hurt. She lives in Jacksonville; her name is Anne Alvarado.
I am the owner of numerous trademark, copyright, and other intellectual property registrations throughout the world in respect of a variety of products. The Twitter account that you have listed infringes on one or more of my intellectual properties. Indeed, such a page violates US, Indian, and other countries' trademark law, in particular U.S. Trademark Reg. No. 4,336,671 and International Trademark 1 119 694 for the word mark "resqme" in class 9 for phone and internet apps. So please be advised that I have already notified the party owning this Twitter account, and today I demand that you immediately remove the account, as it keeps using my trademark for the above-mentioned accounts and related products.

I can't believe you would allow someone to post a horrible image of a horse getting shot, as Chad Shanks did on the Houston Rockets Twitter account. You should ban him and ban the Rockets from tweeting for a month! This kind of hate has to stop on social media.

My account is being hacked, and some of my tweets are being removed, as well as pictures. I would like to download my Twitter archives and have them sent via email, but I cannot get in touch with Twitter. Does anyone know the contact information for Twitter customer service?

I have repeatedly tried blocking this sender on my Twitter account, although it appears blocking other users never has any success; the wrong spammers get blocked instead. I continue to receive instant messages titled "Twitter" from this sender.
2019-04-21T23:17:34Z
http://www.hissingkitty.com/complaints-department/twitter?page=2
TOP reader and street photographer Simon Robinson, who knows how to compose in the square, tells me he has lately been spending most of his time publishing 'zines (his own work and the work of others) under the Fistful of Books imprint. Simon, whose entire name didn't appear anywhere on his old website until I mentioned it to him(!), was born in England, raised in New Zealand, and has lived in Scotland since 1996. David Lykes Keenan: "Simon just released a small book of my photographs taken in the Croatian city of Vukovar 10 years ago. What a great job he did in selecting and pairing the photographs." Simon Robinson: "Thanks for the encouraging comments! The photograph was taken on a Minolta Autocord—I love the square format and have had many 6x6 cameras (Rollei and Minolta TLR's, a couple of Mamiya 6's and a Bronica SQAi). The barber shop is called 'Mohair' and has now moved from King Street (where this picture was taken) around the corner to Trongate. I would really love a digital TLR!!" Last week almost all the posts here were about equipment—"gear"—and a fair number of people complained. Two readers stomped off in a huff over it, one telling me that I was just a "click baiter" and the other opining that the heyday of blogs is over. Heh. Like I ever cared about blogs. I like photography; blogging is just how I happen to be talking about it now. First it was being a student, then it was being a teacher, then it was writing magazine articles, then it was writing multiple columns for a variety of outlets; for a short time it was the PDML and the LUG and other forums; now it's TOP. Some of those outlets worked better than others, but it's all the same. You need a window to the World. Photography's mine. I keep saying this, and I say it jokingly, but I'm serious: the New York Times is "the World's Best Photography Magazine." It's very much worth subscribing to it just for the photographic content, and I mean it. 
Its photography content across many different categories and sections is (small-"c") catholic and encompassing, cultural and historical, richly visual. It covers news and obituaries, profiles photographers, reviews museum shows and galleries and books, presents a wide range of portfolios, and regularly takes a deep look at a very wide range of cultural stories related to photography or inspired by it. You do have to poke around to find it all, though. The Lens Blog is the main place for portfolios, but photography content is here and there in the newspaper and website and magazine. Pops up in all kinds of ways in all kinds of places. The Internet is awash in gear sites, where we happily natter on about shot noise and the forensics of the lens image and whether the X-trans sensor is or is not free of moiré. As a counterbalance—because its content is not similarly mirrored far and wide—the Times is as valuable as any random ten of these. Case in point: the recent article "Love and Black Lives, in Pictures Found on a Brooklyn Street." (Copy that and Google it.) A reporter in Brooklyn finds an old photo album set out for the trash collectors. She takes it home, and gets curious about the people in it and their lives. The editors give her the go-ahead and let her have a researcher and a photographer. And gradually, she uncovers the story—along the way, honoring the lives of the deceased people in the photo album and, by extension, others, not pictured, like them. It's a lovely article that I thoroughly enjoyed, and you really shouldn't miss it. Illumines brilliantly (and compassionately, and nostalgically, and in proper historical context) one of the most important of photography's many prismatic facets. You can't get that from some overlong YouTube video of an Asian teenager wandering city streets taking random snapshots with a very expensive camera. You can't get it from me, either. 'Little England': Romford Market, in Havering, in operation since A.D. 1247. 
Photo by Andrew Testa for the New York Times. And of course there is original photojournalism, too, which is getting as rare as endangered tigers. For example—and this really is just a random example, it happens to be what I was reading just now, over my coffee—Andrew Testa's pictures for the article "In a Pro-'Brexit' Corner of Britain, Impatience to Be Done With It." Nothing particularly distinguished about these, but they are characteristic, which is to say excellent, and it's good to leaven a diet heavy in found pictures and demotic amateurishness with some conscious photojournalism once in a while—one seasoned photographer doing his best to illustrate a particular story with deliberately honest photographs. People complain when I link to content at the Times, because they don't subscribe and they sometimes can't get to the link. So then subscribe. It's worth it. It's something you should do. You should do it. It's the World's Best Photography Magazine. I don't know a better way to say it. And there's very little about gear. Back here at home again, a word about click "baiting": actually, what helps in blogging is not necessarily links, but traffic. Talking about gear improves traffic. Last week, with the gear posts, traffic was up an average of 2,000 page views a day over the week before, and one post drew 248 comments (if you include the "Featured" ones). If I talk about photographs all this week, traffic will go down. That's the way it is. The more traffic, the more your 'umble blogger will earn. It's not the links per se. It's the numbers. And I've never been a hound for traffic, either. If I were only interested in traffic—or click-throughs, or SEO, or viral attention, or whatever (right now, YouTube is the hot way of making money anyway)—I could do a much better job optimizing it. But for me those aren't the most important things. I like photography.
It's fantastic that I can make a living talking about it, but I made a living as a magazine editor too, and a teacher before that, and I'm sure there are more efficient ways to get wealthy than doing this. It's thinking about and talking about photography that I enjoy. You have to engage with something in your life—something to really get into and think about. Something to get to grips with. Photography is one of the things I picked. Pak Ming Wan: "Hear, hear. I was one of the few who probably turned off last week from your site when you went into gear mode...I especially enjoy the photo side of this site." Chas: "I would like to see (slightly) more gear-oriented posts. Not the tech detail stuff...more the user opinion...what is it like to own and use; does it feel good in the hand, how is the viewfinder in daylight, is focus, colour rendition, contrast, noise suppression etc...'good enough' for its purpose, how do the files print...again, not the tech details but real world opinion from a person with experience!" David Babsky: "Thank you. I subscribed." Nick Cutler (partial comment): "Agree wholeheartedly about the NYT. In England our equivalent is The Guardian newspaper, with excellent, long, in-depth articles and photography. The print edition carries a full-sized center spread photograph every day; some of them are simply stunning." John Gillooly: "I came across that NYT story over the weekend and shared it with a few folks I knew would be interested. Really is a great all-around story based on the photography. It is also a reminder of how important it is to provide information on your prints. With no information and context, the people become ghosts. Knowing something about who these people were, what they did and why, gives the photos life." Robin Harrison: "Well...that was most peculiar, unnerving even. I've been reading TOP since day one, but I never expected to see a photo of Romford Market.
That Uppercut is where my father used to take me to have my hair cut. My mother still lives half a mile away from where this was shot. What interest could The New York Times possibly have in this place? Just goes to show how easy it is to ignore the significance and photographic potential of what is under one's nose." Robin Dreyer: "Mike, I've been reading TOP since the beginning and for a while I thought your characterization of the Times was meant in a joking way. Then I really started paying attention to the photographic content and I realized that you were not joking and, furthermore, that you are probably right in this assessment. I have always eked out what I can from the Times without paying for it, but recent events have caused me to remember how incredibly important journalism is to our society, and I decided I needed to do a little more to support it. The Times hires a lot of good journalists so I finally subscribed. And now, along with everything else, I love having unfettered access to all of their photographic content." Brian Taylor: "It's a remarkable tribute to TOP's format that those 248 comments were fun to read! I can't remember ever reading that many comments elsewhere! On other sources, NYT may be good but I've had to cut down on subs, and am in the U.K., so would like to put in another word for the Guardian, whose previous editor Alan Rusbridger was a photography enthusiast. They still have very good coverage. Today's feature, for example, is Richard Page on 'Going to the Dogs, the Face of Modern Spain.'" More than a rumor: retailers and e-tailers are ordering unusually large initial shipments of Fujifilm's new GFX-50S. They're clearly expecting the new camera system to sell strongly despite its high price; initial demand looks to be sky-high for such an expensive product. That's the scoop on that. Me, I normally don't covet super-expensive things. 
"Sour grapes" (telling yourself that what you can't reach is not worth the cost, or isn't as good as it appears, or that you wouldn't like it) is my normal stance toward such things. I consider things like $500,000 cars to simply be symptoms of anti-social wealth inequality and not for me. Along those lines, I wonder if I'd want a Fuji GFX, too, if I could afford one without strain. Roman mosaic with Greek legend: the Delphic maxim "Know Thyself." I think it would make a lot of sense in a way. Now that I use the iPhone for casual "note-taking" snaps (something I had to force myself to start doing a couple of years ago, as you might remember), a FF or FF+ sensor camera makes sense to complement it. I learned how this works years ago when I spent a summer using a 4x5 view camera intensively (the only time in my life that I did so). I found to my surprise that it improved my 35mm shooting (which was my normal technique at the time). I had been trying to use the 35mm camera to satisfy my desire to make carefully composed, precisely framed shots; once the view camera took over that role—a role it was better at—my 35mm shooting got looser and more free. And because I had the 35mm to satisfy my "note-taking" and "visual exploration" urges, I wasn't tempted to try to take snapshots with the view camera. Anyway I can imagine the GFX-50S being the "yin" to the iPhone's "yang." The flip side of the same coin, the other end of the same stick. Instead of going with one system in the middle (I consider APS-C / Micro 4/3 to be the "middle" of the sensor-size range), go with two cameras that are each more extreme—phone camera for records, notes, sharing, and utilitarian tasks, and a FF+ sensor camera for more deliberate, contemplative, expressive, finished work. It's a theory, anyway. I wonder if it would work, or if I'd just miss the regular ol' middle-sized cameras I'm used to. I really do like those. Be all that as it may, I'm off to work on me book over the next two days. 
I'm going to enjoy that. I've been looking forward to it. Back on Monday. Hope you have a nice weekend your own self! Andrew Lamb: "Re 'I found to my surprise that it improved my 35mm shooting (which was my normal technique at the time),' Mary Ellen Mark said something very similar. She stated that moving from 35mm to medium format made her a better 35mm shooter and moving from medium format to large format made her a better medium format shooter. "Moving to large format just made me broke." Dennis Ng: "My D810 fulfills that role vs. the iPhone 7. I guess it is better to be even better." Peter: "I get you. The two cameras that I use most these days, and that complement each other in terms of subjects, approaches, and projects are a Minolta TC-1 (small 28mm point-and-shoot with flash) and a Chamonix 4x5." "It doesn't matter. It doesn't matter." Someone suggested yesterday that a lot of Fuji and Micro 4/3 mirrorless users "hang out" on TOP. I have to say I hate that idea. I like that those people are here, of course, but I don't like the thought that I'm making other people feel unwelcome just because I blather on about what equipment I like. So I just wanted to mention that it doesn't matter. I don't care what you shoot. Whether you shoot infrared film in a toy camera, won't touch anything that doesn't say "Leica" on it, have three of the biggest pro Canons, have never shot with anything that has anything less than a fixed 10X zoom, are proud of your new Sony or think electronics giants shouldn't even make cameras, love your Samsung NX1 (that right there might be the most I've ever written about the NX1), love your Phase One back, love the camera in your Android phone, won't shoot with anything but a Foveon sensor, are a fanatic for lightweight view cameras, have $80,000 worth of top-end cameras or think anyone with more than one inexpensive used body is a snob...we love you all the same. Honestly, it doesn't matter to me. 
I'm not able to get all enthusiastic about more than a few brands, because, well, I'm not able to use everything all at once. But that doesn't mean I scorn what you use. If you're happy with your photographing and your photographs, that's all that matters. Wes: "I've never understood why photographers get so emotionally attached to their gear. Make one negative comment about a particular brand on a photo forum and prepare for an attack from multiple fronts. Remember, it's just a tool. "P.S. I have an NX1. It's the most enjoyable camera I've ever used and no one will ever take it from me. It's perfect." Stephen Cowdery: "Actually, Mike, I think yours is the least 'fanboy' of all the photo sites. The Micro 4/3 format appeals to anyone who is looking for a jack-of-all-trades camera, so it is only logical that you should cover it, and that you might have a favorite 'Flavor of the Month.' At least you cover it instead of dismissing or ignoring it. A quick look at your Categories in the sidebar shows a wide range of your interests with Cameras and Camera Reviews only a small part of the whole. "TOP is also a site blessedly without the usual trolls, thanks to your moderation of comments." David Bateman (partial comment): "I just think a lot of people use Micro 4/3 cameras. More than the sales numbers imply. I look for used lenses on eBay every once in a while and I always see a huge number available for 4/3. Not so much for Fuji." Steve Jacob (partial comment): "I think that Micro 4/3 and Fuji have hit the mark with a particular demographic that largely represents TOP readers, which is that of mature enthusiasts. In other words, it doesn't surprise me that many of your members arrived at the same place you did." Dave Miller: "I think I must be a terrible brand snob, one of the worst. I own Nikon, Canon, Pentax, Fuji, Sony, Sigma, and Panasonic cameras. But not one single Olympus...." 
We've received a lot of comments to the previous post (below this post)—210 total as I write this. I've added a number of new "Featured Comments" this morning, and many more new ones are now posted in the main Comments section. But oh, so fine. The 20mm does have a few flaws that stop it short of being perfect—it's a bit slow to focus (not bad, but noticeably not as fast as the 12–35mm) and it's not stabilized. But its beautiful optical qualities make up for that. Flawless bokeh (out-of-depth-of-field blur) and a beautiful, smooth, 3-D look. The 85mm-equivalent is maybe even a little better...a near-perfect short telephoto, very close focusing, very sharp wide open with minimal falloff. In fact, I sometimes wonder if the Panasonic GX85 with those two lenses might be an ideal setup for a serious beginner. Not too expensive but not too cheap, everything you need to comprehensively start exploring generalist photography. If I were the teacher I'd recommend sticking with that one camera and two lenses for at least three years, resisting all temptation to add any more gear in the meantime. I probably won't be using just two lenses anytime soon. But the Panasonic 20mm and 42.5mm would be great choices if I did. Just one guy's comment to add to the pile! Rob: "I would say that the ideal kit for a beginner depends on what that beginner is interested in and who, or what, they are inspired by. When I was a beginner (back in the late '90s when I was in high school) I was inspired by David Hume Kennerly's book Shooter. On the cover of that book was a picture of Kennerly in fatigues and a combat helmet with two Nikon F's slung around him. One F had a 200 ƒ/4 mounted on it and the other had a shorter telephoto prime, likely a 135mm or an 85mm. So at 17 years old I bought a Nikon F and a 200 ƒ/4—just like Kennerly's—and that was my only lens for quite a while before I bought an 85mm ƒ/1.8 Nikkor-H and an F3 to put it on. Loved both lenses; wish I still had them. 
Ever since then I have been more comfortable with telephotos than any other kind of lens—currently my favorite is the Fuji 90mm ƒ/2 on my X-E2. So my advice to a beginner looking to buy a lens would start with the questions: 'What inspires you? What interests you? What photographers do you admire?'" Mike replies: Worked for you, and that's good, but for a majority of students, starting out with just a 200mm-e prime lens and nothing else wouldn't work very well. Although your story does illustrate the advantages of getting to know one focal length well before moving on. It's not always the best idea for beginners to copy the equipment used by the photographers they admire. Those photographers are usually much more experienced and have worked through a lot of issues already. What would you say, for instance, to someone who loved David Hume Kennerly's book On the iPhone? Should they get an iPhone? I'm not saying your advice is bad. Just playing devil's advocate. I ran into David Hume Kennerly in a park in Georgetown back when I was in art school and, trying to make conversation, I asked him what camera he used. He said emphatically, "It doesn't matter. It doesn't matter." So there's his opinion on it! DA: "The difficulty I find with your advice as it pertains to what is great for you, Mike, is that you are not a beginner. You can't be farther from one. And you also have a very defined style and interest in photography, which a beginner has not come close to figuring out. I find many of the comments to be in the same vein and I am not sure how helpful they would be to a novice. "First, I would define a beginner as someone excited about photography, but without any technical (or even artistic) knowledge of it. When I started, as the saying goes, I didn't know an f-stop from a bus stop. But I loved pictures. All kinds of pictures, from pretty models on sets to planes in the sky and everything in between. I still do. 
"To that end, I would recommend a serious beginner get an affordable camera from Canon, or Nikon. My preference would be for the Rebel line. Get a T6i because it has the image quality and the fast controls to let you do anything you could want, and it is far cheaper than many mirrorless cameras. Too expensive? Get a used T5i/T4i/T3i. Get the kit zoom, or grab a used zoom with better specifications—especially a third party one with an ƒ/2.8 max aperture. That's the 'one lens' of today that lets the beginner really learn what she/he can do. Then add a 70–300 that you can afford. There are lots of examples for $500, or less. There's little one can't do with that kit and it won't break the bank. Learn from there and add as needed. "You don't like this advice, because it is so 'old school' and so traditional. You've moved on and it has become popular to shake one's head at 'Canikon' and how lame and behind-the-times they are. Yet, there is no system as complete and versatile as the Canon/Nikon ones. No better system to actually learn anything you want to learn using real photographic tools rather than Photoshop and/or cobbling together adapters and manual lenses. "And to the serious beginner I would say once you learn photography and what you like about it using these excellent and inexpensive tools you can switch to anything you want and pursue photography the way you want to pursue it." Mike replies: I'm not a beginner but I've been a teacher, and I think like one. And I've recommended Canon T[x]i cameras to various people in the past, including good friends about to go on big trips. I don't think they're optimal for beginners, though, and I know zooms aren't. We'll just have to disagree on that score, I guess. I've been on a "thinking about gear" kick recently. You might have noticed. I find that too much product researching and testing is just confusing. 
Therefore, I tend to keep my "gear footprint" low—for the last two years, I used a single camera body and two prime lenses. I don't care that the camera isn't a current model, or that the lenses might be 'bad.' On almost every outing, this gear yields me one or two pictures I'm pleased with, and that's what counts. This is another interesting gear topic. Some people feel the appeal of having a big selection of gear and switching around, or mix-and-matching for anything from specific moods to specific jobs; other people are the opposite, and feel the appeal in paring down to essentials. So let me ask you a question: if you could have only two lenses, which would they be? Extra credit if you name specific lenses. Extra-extra credit if they're lenses you actually own now. If you're the type of person who couldn't get by with only two lenses and who thinks the question is stupid, one word: understood. You're excused. Me, I see the appeal, but I'm having trouble with the question. P.S. And if you aren't in a gear mood and would rather think about something else, seen any good movies recently? Seriously, I'm looking for a few good movies to watch in the evenings, and I find myself rewatching old movies I liked years ago, which is making me feel stuck in the mud. "Open Mike" is the editorial page of TOP. It appears on Wednesdays, assuming the moon is in phase and the stars are aligned. Kev Ford: "For me it's the 35mm-e of the Fuji X100 plus the XF 56mm on my X-Pro1. Is it odd to have an ILC and only one lens? It feels a little odd." Mike replies: Not at all. Early on, interchangeable lenses served to customize cameras as often as they were actually interchanged. Photojournalists in particular would stick with one lens / one camera, but would have multiple cameras—they felt changing lenses on the fly would slow them down too much. Do a Google image search for people like George Rodger, W. 
Eugene Smith, and David Douglas Duncan, and any others of that era you can think of, and you'll see what I mean. If you can find portraits of them you'll see they carried two or three camera bodies, each with its own lens. Often not the same camera body, even. Frank Figlozzi: "I'd start with a Fuji X-E2 (a rangefinder-style APS-C camera); follow it with the Fuji 35mm ƒ/2 lens and the Fuji 18–55mm zoom; and—if you turned your back and looked in the other direction—I'd sneak in a third lens, the outstanding Fuji 14mm ƒ/2.8. All of which I own. Taking pictures is fun again!" Kalli (partial comment): "Is your 'thinking about gear' kick caused by you not photographing? I find that it usually happens for me at least once a year, usually in winter, that I, for some reason, don't photograph and then I furiously research gear instead as a substitute. That period was cut short at the end of last year after venturing out a couple of times and coming home with some photos I was happy with." David Anderson (partial comment): "I must admit two prime lenses would be one too few to be ideal for me; either just one or three would be my choice." Dale Greer: "For Nikon FX, it would have to be the PJ workhorse zooms—a 17–35mm ƒ/2.8 and a 70–200 ƒ/2.8. I need the range and speed to get the job done. For personal enjoyment, I shoot Micro 4/3 and prefer fast primes (lighter than zooms, and they regain some of the depth-of-field isolation lost to the smaller sensor). The Panasonic Leica 42.5mm ƒ/1.2 Nocticron renders beautifully and is perhaps my all-time favorite short tele for any mount. On the other end, it's a toss-up between the Olympus M.Zuiko 12mm ƒ/2.0 and the Panasonic 20mm ƒ/1.7. Both lovely lenses." RubyT: "This post describes the long-time war between my inner magpie and my inner ascetic (at the moment, the magpie is winning). 
If I could only have two lenses they would be the Pentax FA 77 Limited (my all-time favorite lens), and the Pentax DA 16–85mm, which covers pretty much any shooting situation I'm likely to find myself in. It doesn't render as beautifully as the 77mm, but it's weather-sealed and practical. I do own both of them right now. I took a fall onto the 16–85mm while hiking recently, and I shattered the hood, but the lens is fine (as is the camera). It's a great lens for hiking." James Dyrek: "I have recently adopted the Fujifilm system and I picked their 23mm ƒ/1.4 and the 56mm ƒ/1.2. And the 23mm is the one I keep on my camera." Ed Donnelly: "If I had to pick only two, my Canon 100–400mm II for wildlife and trains, and my Fuji 18–55mm for everything else. I use more of course, but these two are by far the most versatile and both provide excellent image quality on their respective bodies." Michael Poster: "I don't need two. One 35mm (or equivalent to that) will do. It's not that the 35mm focal length is ideal, necessarily, it's that I can always make it work." Mike replies: Well said. That's a very good way to sum up the main benefit of a lens with a 35mm or equivalent angle of view. Timo Virojärvi: "Nikon PC-E 24mm ƒ/3.5 and Sigma 50mm ƒ/1.4 Art. I have them both (and 35 other lenses)." Stephanie Luke: "You probably want to hear about a movie you can watch at home, but all I can come up with is something we recently saw at the local cinema: 'Passengers.' It got poor reviews, so I wasn't expecting much, but I thoroughly enjoyed it. I admit, I'm a sci-fi fan, and good ones are pretty few and far between. There was some great CGI and just plain beautiful scenery. It's a rather 'slow' film but I like slow. Maybe most of all I liked that it didn't have a villain, which is quite rare these days. It was plain, old-fashioned sci-fi, with a basic philosophical conundrum." Alan Wieder: "I walk the streets and shoot. 
Have fallen in love with the Leica Q—no issues about lens choice anymore." Rube: "If I had to pick two, I would only pick one: the lens on the Ricoh GR. Of course I would leave it on the body! GRIN." Shaun: "The 28mm equiv. on the Ricoh GRII—what an excellent walk-around lens/camera. I'm continually impressed with this camera and lens, and it fits in a pocket. I do wish Ricoh would do a 35mm version of this camera/lens. The other would be the Sony Sonnar FE 55mm ƒ/1.8 on an A7rII. Own and use both." Dogman: "I like a body dedicated to a lens so there's not a lot of changing out lenses. If I had to pick two lenses only, the choice would be pretty easy. Fuji 23mm ƒ/1.4 and 35mm ƒ/1.4, each mounted on a Fuji body. Both are great lenses, nearly magical in their look. I have to add that I could also happily live with a Fuji X100 series camera with its 23mm ƒ/2 fixed lens and a Ricoh GRII with its 18.3mm ƒ/2.8 fixed lens. The Fuji X100 cameras, in use, are almost transcendental. The Ricoh has one of the sharpest lenses I've ever used." Marcelo Guarini: "I shoot Micro 4/3. My favorites by some margin are the Voigtlander 17.5mm ƒ/0.95 and the new Olympus 25mm ƒ/1.2. Both are absolutely fantastic lenses—large, but optically really beautiful." Stuart (partial comment): "Aggghh—get behind me Satan! I'll try to keep this short...." Wesley Liebenberg-Walker: "I have an OM-D E-M5 Mark II and use the Oly 12–40mm and the Panasonic 35–100mm. I have other lenses, but if they all disappeared tonight I'd still be happy with those two for nearly everything that I shoot. (I wouldn't be happy that the others disappeared, though...)." Doug Thacker: "One, two, or three lenses, and which one(s)? This has always been my favorite exercise, because it requires so much thinking and self-reflection, and paring down, and reveals so much about one's development. "For years I shot with only a 50mm. It was always a bit too long, but 35mm was too wide. 
I'd have preferred a 40mm, or a 45mm, maybe, but I made do with 50mm and prided myself on being able to shoot anything with it, and get any shot I really wanted. And where I couldn't get the shot, I told myself I really didn't want it. Nowadays 50mm or the equivalent isn't right at all, neither wide enough nor long enough. "When I started with the X-T1 I settled on the 14mm ƒ/2.8 and the 27mm ƒ/2.8 as my everyday walking-around lenses. But over time I find that I almost never use the 14mm. The 27mm is the one I use constantly, but it's too slow, both in terms of focus and aperture, despite being pleasingly small. "I skipped the 18mm ƒ/2 because I also have the Ricoh GR and figured it could serve as my 28mm. And in fact I now realize this is the focal length I most enjoy using. "So, when I upgrade to the X-T2, my new walk-around lenses are going to be the 18mm ƒ/2 and the 56mm ƒ/1.2, neither of which I yet own. Upgrade day is going to be expensive, then, but I have a feeling it will result in a more satisfying shooting experience, and more shooting." FKT: "I've been shooting more film than digital for personal projects in recent years. My favorite 35mm body is the contemporary Cosina/Voigtlaender R2C, which has the old Zeiss Ikon Contax rangefinder mount. I've got four lenses for the body (and an original Zeiss Ikon Contax IIa body), but my two favorites are the Zeiss 35mm ƒ/2.8 Biogon and Zeiss 50mm ƒ/1.5 Sonnar. Both are post-World War II models, and all four lenses were overhauled by Henry Scherer at Zeisscamera.com. Servicing is necessary as the four lenses are between 60 and 65 years old." Ben Rosengart: "Nowadays, I use the Fuji 23mm ƒ/1.4. The FOV fits the way I see—it could be a few degrees wider—and if there's a picture which demands a longer lens, well, I let someone else take it. In theory, I want a portrait lens too; in practice, I can happily shoot with one focal length for years at a time." 
Rod Thompson (partial comment): "As to lots of gear, I find the less I have the easier the process is." Steve Smith: "The only two lenses I need are the taking and viewing lenses on my Rolleicord." Mike replies: Yes, it's one of the great advantages of a TLR—no lens choice to worry about. Really teaches you how to see like the camera sees. Something photographers didn't really recover in digital until smartphones came along. Note Carey Rose's article at DPReview—he accomplished the same thing by only bringing one lens to Thailand. Another advantage of the old days was that view cameras and rangefinders reinforced our knowledge of prime lenses—you couldn't put a zoom lens on a Leica or an 8x10 Deardorff. Choosing something based on simply liking it, as opposed to what's "supposed" to be the "best," is an issue that interests me. When I got into photography (here comes a digression, but bear with me), I tried various films and picked one based on the tonality and grain I liked best, even though it wasn't the "sharpest." I remember experimenting with D-76 and Rodinal and picking D-76. Rodinal was renowned for "acutance," or edge contrast, and was beloved of hobbyists; but I thought D-76 yielded better tonality and was subjectively better at rendering the volume of spaces...Rodinal looked a little "layered" to me. I learned more as I went along, picking papers and enlarger light sources and so forth based not on what was "best" in the estimation of some guru or according to general consensus, but just because it was what I liked. The details here don't matter, really. The point is just that I tried things, looked at the results, and, as I went along, picked whatever most appealed to me. It was all done by taste. Of course I did read and do research, but where reading, product research, and learning had the most effect was in identifying things to experiment with. But my own experiments always outvoted anyone else's conclusions. 
When digital came along we inadvertently created a strong culture of technical evaluation and comparison. Was X better than Y? Was Y as good as Z? That made a lot of sense in the beginning, when digital was a) insufficient and b) competing with film, and c) improving drastically and quickly. Now, many people think we've passed the point of "good enough," where we can look at pictures for what they are and not necessarily be wowed by the technique or disappointed by the lack of it. We've gotten to the point where we can go back to picking gear and techniques based on taste, and on the technical qualities that appeal to each of us, individually. And I tend to like photographers who have a strong, recognizable taste that shows up in their pictures, too, even if their technique is not exactly my own. Street photographer Juan Buhler likes B&W tonality that looks a lot like the aforementioned Rodinal, with the middle values moved lower on the scale. It's not a look I like for my work, but he makes it work for him and it's how he sees. I take it at face value from him and I like his work a lot. Rodger Kingston's "found" photography (Rodger is a major collector of vernacular photography) uses bold colors that work together with the longer lenses he likes, to "flatten" the images into a suggestion of two-dimensionality, which lets the viewer relate the colors to each other more readily. His color palette is far beyond what I would consider—it would be excessive if I used it—but he makes it work, and in fact his pictures wouldn't work without it. Kenneth Tanaka's clean, classic technique suits the almost architectural quality many of his pictures share, their strong sense of design, and complements the appreciative, appraising quality of his observation of cities. All three of these photographers use techniques that are very different from each other's, but in each case their technique is subservient to the work and maybe even indivisible from it. 
Is any of them "right" or "better" than the others? Of course not. We just accept the work for what it is, and we wouldn't want it any other way. Whether someone uses a 1" sensor camera like Kirk Tuck has been enamored of recently, or a Phase One back, the resulting work will either work for us or it won't—but not necessarily because of technical choices. What matters is their taste in technique based on what they're trying to do and say. What matters for each of us now is not what's "best." It's more like what each of us happens to like. In other words, it's getting back to the way it should be. ...And by "little" I mean 13x17.3mm! So far, the only cameras that have it are the Panasonic GX8, the new super-duper (and super-expensive) Olympus E-M1 Mark II, the Olympus Pen-F, and the forthcoming Panasonic GH5. Is it "better"? Well, I confess to not being too concerned about that. I like it, though, which concerns me more. I like it a lot. I love the "grain" at ISO 3200...what I mean is, its noise looks a lot like film grain. And the images at all speeds have a certain "bite" to them that bigger sensors with "smoother tonality" seem to struggle with. I've only used the GX8 with the Panasonic 12–35mm (currently on sale for $300 off because it's being replaced with a slightly revised version). It must be the most flexible sensor yet in Micro 4/3, too. It corrects easily. I even (gasp!) like the look of mildly HDR'd images (like the Impala in the "Go, Bernie, Go!" post linked in the first caption). Finally, it might not have the most detail, but it's got a really nice way of rendering detail that I find pleasing. I've always liked high large-structure contrast (the lowest lp/mm line in an MTF chart), and this new sensor is good at that "look." Maybe that's the lens, too. Note that the blog software kind of tromples the value of these illustrations qua evidence, but if you accept them as mere illustrations and just take my word for it you'll get the basic idea. 
match the visual impression of the scene. The file holds up well. It's just that I really like the look of the pictures. And it seems to do all the technical-checklist things well enough. No shortcomings that I discovered during the time I had with it, or while geeking-out over the files of the pictures I took, and noise/grain that I find appealing. Please note I'm not saying I don't like the look of other sensors. (The one in your camera is particularly nice, so no worries!) Just saying I really do like this one. Nigel: "One way the sensor tromples all over the competition is readout speed; in the Olympus E-M1 Mark II it enables 60 frames per second shooting. Looking forward to the GH5 when it comes out...." John Sarsgard: "I think I've mentioned in an earlier post that I love my GX8 and this sensor. I do not expect it to compete with my 4x5 in rendering detail, but I love it like I would likely love the new Leica digital if I had the money. "But maybe I love it more because it is a democratic camera. One does not have to be wealthy and attracted to the artifacts of wealth to enjoy a camera this good that has a sensor this good. And it feels wonderful in the hand. If you don't like the way it works, almost everything is customizable. The silent shutter is elegant. People that know a little about cameras see me shooting on the street with it and ask if it is the new Leica. I tell them it's the democratic version." Kivi Shaps: "Been using the GX8 for four months now. Got it in the 12–60mm zoom lens deal. Promptly sold the zoom and started to experiment with different primes. Settled on the Leica 15mm ƒ/1.7 (30mm equivalent). Love it for street shooting and even did two indoor events with pleasing results. I came over from a Nikon D7200 and a Leica M9. Peachy." Rico Pfirstinger's "X-Pert Tips" book for the Fujifilm X-T2 is out from Rocky Nook. ...Just in case you own, or have ordered, an X-T2. 
If you ask me, the X-T1 (still available) was already fine...always fast enough for me, excellent controls if you're partial to the "see where it's set" knobs-'n'-dials style of camera controls, and it has plenty of pixels at 16 MP. But Fuji took the X-T1 and exhaustively refined it, creating a markedly different experience and improving dozens of details comprehensively. I don't know how you get to "much better" from "really good," but that's basically what we're dealing with in this case. Anyway, I have Rico's The Fujifilm X-T1: 111 X-Pert Tips (now in its Second Edition), and it's good—one of the most accessible and easy-to-digest books of its genre that I know of. I have to assume that this edition will parallel the improvements in the camera itself—similar but even better refined. Rico has written similar books for the other major Fuji models as well. Armond Perretta: "Regarding Rico's X-T1 book (2nd Ed), I may be a slow learner but each time I go through this tome (in detail) I find there's just that little thing I somehow missed on first viewing. BTW I'm a careful reader. Great book combined with a (still) great camera body." Kent Phelan: "I have been reading this book since it came out. I pre-ordered it long before the release. Fuji X-T2 owners: this book is a required accessory! I bought the e-book, which makes it easy to find exactly what you are looking for. Want to know about an obscure item four menus deep? Rico's got you covered. I will never buy a Fuji camera again without Rico's companion guide." So what if you don't need no steenkeeng medium-format digital Fuji? According to FujiRumors, B&H Photo has found a few brand-new Fuji GF670's in its warehouse, and will be shipping them on the 25th. The GF670 was a new, modernized version of a classic medium-format folder. Unlike most classics, it is dual-format—you can set it to shoot square or 6x7. The lens is a very fine 80mm ƒ/3.5. 
The camera was jointly developed by Fujifilm and Cosina, was built by Cosina, and was also sold as a Voigtländer. Introduced in 2008, it ended its run a couple of years ago. And if you'd like an authentic, historical folder, check out Jurgen at Certo6. He restores older folders to working condition. The "Rolls-Royce" late models are the Zeiss Super Ikontas and the superb Agfa Super Isolette, but an earlier one might be more fiddly and more fun, and a better conversation piece. Looks like your very last chance to buy an '08 GF670 new. And if you get one, please also order some of this and maybe even some of this! TOP: a modern replica of the aging original. Peter Wright: "Interesting post! The only medium format cameras I own are a GF670 bought from B&H, and an Agfa Super Isolette bought from Jurgen at Certo6! (The Agfa came with the original leather case.) I've had both for several years now and can't imagine parting with either. These are genuine coupled rangefinder cameras producing great negs or slides and really satisfying to use. (I tried out Hasselblad, which is better built, but it never hit the sweet spot for me.) I really think an extended period with one of these cameras will teach you lots about photography—something along the lines of the 'Leica for a year' approach." Andrew Lamb: "My favourite camera is a 70-year-old Super Ikonta. Wonderful bit of engineering. Have owned it for 25 years and would never part with it. On a point of order, Jurgen also sings the praises of the Bessa II but I don't think they're a patch on the Super Ikonta or Super Isolette. Amongst other gripes, the 6x9 neg never seems to be held very flat, thus making the camera incapable of showing off the lens to its best." Stephen Gilbert: "How to tell you're a gear head: you have no desire to shoot film, but still covet the Fuji folder." Unfortunately, I won't be able to attend the Todd Gustavson lecture on the Kodak DCS today at noon...but for a different reason than I envisioned. 
Turns out my son and his girlfriend are arriving for their Winter visit today. I knew they were coming "at the end of the month," but I didn't know which day until this morning. So I'll be (happily!) doing housework and shopping. And cooking. That's my friend Earl Dunbar, who is a Rochesterian and was planning to go to the lecture before I even mentioned it. If you want to connect, Earl says he'll be in the Gift Shop/Café half an hour before the lecture starts at noon. Look for the guy with the crutch (he had polio in his youth) and a Domke F5 bag with a Rolleiflex in it. The Curtis Theater is adjacent to the Café at the George Eastman House, so it's a handy place to meet. Todd says he'd be happy to have lunch after the lecture, and you could very well meet some of the other pioneers of digital photography as well, such as Steve Sasson and Jim McGarvey. Earl would be happy to meet other TOP readers. Sorry I can't be there, but next time! I need to become a Member of the George Eastman Museum so I get notices about this kind of thing more in advance. UPDATE, Sunday—Earl reports: "Mike—thanks for giving a shout-out. In addition to meeting Todd and some of the Kodak DCS luminaries, John Hamilton from Toronto attended and introduced himself. It was a lovely time and we missed you!" "Here's Todd signing his book for Jim McGarvey (standing), who led the development of professional digital cameras at Kodak for 17 years [and is the author of this short history —Ed.]. Jim's talking to Ken Parulski, former chief scientist for Kodak's digital camera division." James: "Mike, now that you mention Earl Dunbar, did you know there once was an Earl of Dunbar? Does your friend know? I bet his parents did." Earl replies: "Ha! Yes, I know well of the Earl(s) of Dunbar, both historically and all the times my childhood 'friends' took that reference too far.... At one time Clan Dunbar was the second richest and most powerful clan in Scotland. 
Until some ruling jackass destroyed my inheritance! Slainte!" Yes yes yes yes yes yes yes yes. If you're looking for a change, we've got just the thing. Tokyo's hottest camera is the Fujimax, AKA the GFX-50S. All that, and a doorman who high-fives children of divorce. Dennis: "Stop. STOP!!! La la laaaa...I can't HEAR you!!!" Geoffrey Heard: "I live in a land of stunning scenery, volcanoes and all that stuff—Rabaul, New Guinea—and I have a couple of big scenic photo opportunities I am trying to get fit enough to take (walking up/climbing significant mountains is involved) and as a Micro 4/3 user, I can't help reflecting that a bigger sensor would be nice to have at the end of these exertions. "Aargh, forget it! Just take the ($300) tripod and shoot panoramas! They will do the job! Someone mentioned the usefulness of the 4:3 format for verticals. Absolutely! After 50 years shooting 35mm, I use 3:2 mostly for horizontal and switch to 4:3 for verticals. It is good! And on my Panasonic cameras, I can have that change on a button." In May of 1991, Eastman Kodak, the inventor of and early leader in digital imaging technology, introduced the first commercial DCS—Digital Camera System—at a press conference in New York City. Utilizing a Nikon F3 carcass (Kodak and Nikon made a virtue of this by claiming it would make the transition to digital easier for photojournalists already used to that camera), six models were offered for sale at prices between $20,000 and $25,000. It was the culmination of years of prototypes and proofs-of-concept. On Saturday at noon, in Rochester, our friend Todd Gustavson (author of the excellent Camera: A History of Photography from Daguerreotype to Digital and several other books of camera history and lore), will be giving a talk at the George Eastman Museum called "The Kodak DCS: 25 Years of Digital Photography" in the Curtis Theater. It's free to Museum members and included with Museum admission. 
There's an 80% chance I'll be there (I've been sick this week, hence 17% of the uncertainty—the other 3% would be the possibility of bad weather!). If any friends o' TOP make it to the talk, perhaps we can gather for lunch afterwards at the Museum Café. There's also a selection of DCS cameras on view in the History of Photography Gallery. And we can go stand under George's elephant (I'll tell the story if you haven't heard it). P.S. Note that contrary to received opinion, neither Nikon nor Canon had anything to do with the development of the early digital cameras that used their bodies. In fact, some of the bodies used for Kodak's early "newsroom" digital cameras were bought off camera store shelves. Todd will no doubt cover this in his talk. Compacted Leica: the big change in the new Leica M10, available from today at $6,595 in black and silver, is that Leica has finally slimmed down the M series, which were fattened for digital with the original M8. The M10 is closer to the dimensions of the old film Leicas. Good move, and 'bout time. The M10 (Amazon) is a significant update, not a simple model refresh. The three GF lenses announced for Fuji's new GFX system have effective 35mm-equivalent angles of view of 95mm-e, 25–51mm-e, and 50mm-e. I'm not aware of a GFX lens roadmap yet, but Fuji is among the best manufacturers at filling in its system offerings quickly and efficiently. A view camera adapter lets you attach your 50S body to most 4x5 view cameras. (Since the GFX-50S and Leica M10 are within shouting distance of the same price, let me just ask you a thought-experiment type question—a rich friend says he'll buy you one or the other to use for the next two years; which would you choose?) • The Fujifilm XF 50mm ƒ/2 R WR lens in black or silver, a 76mm-e short tele for $449. 
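The "mm-e" figures quoted for these lenses follow from the format crop factors. A quick sketch of the conversion, with the caveat that the 1.52× and 0.79× factors are the commonly quoted values for Fuji's APS-C X system and the GFX's 44×33mm sensor, not figures stated in the post:

```python
# Convert real focal lengths to 35mm-equivalent angles of view
# by multiplying by the format's crop factor (diagonal ratio vs. 24x36mm).
def equiv(focal_mm, crop):
    return round(focal_mm * crop)

APSC = 1.52   # Fuji X system (approx. 23.6 x 15.6 mm sensor)
GFX = 0.79    # Fuji GFX (approx. 43.8 x 32.9 mm sensor)

print(equiv(50, APSC))   # XF 50mm f/2 -> 76 (the "76mm-e" short tele)
print(equiv(110, GFX))   # GF110mm -> 87
print(equiv(23, GFX))    # GF23mm -> 18
print(equiv(45, GFX))    # GF45mm -> 36 (Fuji's literature rounds to 35)
```

This is also why the GFX's advantage is sometimes called modest: a 0.79× crop factor means the sensor diagonal is only about 1.27× that of full-frame 35mm.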
Dennis (partial comment): "From Fuji's press release: 'Three Additional FUJINON GF Lenses to be Announced Later in 2017: GF110mmF2 R LM WR (equivalent to 87mm in 35mm format) GF23mmF4 R LM WR (equivalent to 18mm in 35mm format) GF45mmF2.8 R WR (equivalent to 35mm in 35mm format).'" David Cope: "Thank you, my imaginary, hypothetical friend. I'll take the Leica. Now, while you're in a generous mood, can we talk lenses...." Paul Bartlett: "Not sure if you were joking, but I believe the F in X100F actually stands for fourth. Or so the story goes...." Miserere replies to Paul Bartlett: "I didn't know this! But it makes sense; S for 'second,' T for 'third,' and F for 'fourth.' Now...what will they use for the 'Fifth,' 'Sixth,' and 'Seventh' iterations? Maybe the marketing department never thought this through the whole way, or they didn't believe the camera would live beyond its second iteration." Mike adds: According to standard classification, those would be the X-100Fi, the X-100Si, and the X-100Sev. Or maybe at the next iteration Fuji will figure out that it can go to 101 or 200. Chris: "The diagonal is only 1.3X greater than 35mm, so this seems a camera looking for a purpose. All you get is a camera with larger, slower, and more expensive lenses with marginal IQ improvement. I don't really see the point. It makes sense to Fuji as they don't make a FF 35mm camera, but I predict this will not be a big seller. I'd take the Leica, although I can't see me being able to afford (current) genuine Leica lenses for it." Mike: I think it will be a big seller all right, in the context of medium-format digital. As far as "marginal" image quality improvement, my take is that it's meant as a complement for people who already shoot the existing X System, where, as you say, it makes sense. So it's a camera for a different sort of work, for people who already own Fuji APS-C cameras with X-Trans sensors and all the capabilities those offer. 
Perfect for X-system shooters who want a little extra edge for high-quality portraits, or formal wedding pictures, or landscapes, or to make big prints. I'm deeply pleased by how sensible the move is...someone is thinking not of how to exploit niches here and there in existing markets, but of creating a coherent, thorough-going, cohesive system overall that has great balance internally. I really like what Fuji is doing. How is it able to be the only camera company apart from Leica that is not dictated to by its marketing department? Marcelo Guarini: "For many years I used an M4 and then I added an M6; I still have and use them, but quite seldom. I also have five beautiful M Leica lenses. I love the M system, but if I were rich, the GFX-50S hands down. Today I use Micro 4/3." Steve G, Mendocino: "Also notable in the X-100F announcement is that it uses the same (NP-W126S) battery that's also used in the X-Pro2, X-T2 and X-T20. What a great idea—let's hope other camera manufacturers take note. "Meanwhile, I think I'd go for the Leica. I like (and get more use out of) smaller camera bodies." Benjamin Marks: "That Leica M10 still looks kind of chunky to me...not that that would stop me from buying it if I found seven grand accidentally stashed in my sock drawer. But I'd say thin-er, with the emphasis on the 'er.'" Mike replies: It's unfortunate that Camerasize includes several film Leicas but only front views for each. But the M10 does look bigger than the M4 from that angle. It's the thickness that would really change the feel of it. Paul De Zan: "I enjoy reading new camera announcements more than ever before, because these days each one seems to reinforce my sense that I am as camera-ed up as I could possibly need to be. Yay...and no sale!" John Gillooly: "Was pleasantly surprised to see upon comparison on Camerasize that the GFX-50S is basically the same size as the Nikon D500. Interesting." 
Speaking of books, I didn't know that this book had been reprinted...André Kertész's On Reading is slight, only six and a half by eight and a half inches with fewer than a hundred pages. Pictures he made over many years of readers reading. A minor masterpiece from the genial and charming Hungarian, who must be among the most humane people ever to have taken up a camera. Jim Newton mentioned it in the Comments to the "Engage" post. He said: "Reading and photography come together in this beautiful little book by André Kertész. Published in 1971, it is available as a reprint. If you are a reader and a photographer this belongs in your library." I'll second that. Curt Gerston: "I teach middle school photography, and this is a book I use as an example for thematic shooting (an assignment I give). It's not always easy to explain to a 12-year-old, but when I show them On Reading they get it pretty quickly. It's a wonderful little book." Martin: "I saw the exhibition of these pictures at the Photographers' Gallery in London, but it works just as well as a book; small images you can hold in your hands." Rodger Kingston: "I got my copy of André Kertész's On Reading in February 1973 while my wife Carolyn and I were on our wedding trip to NYC. We wandered into the Hallmark Gallery on Fifth Avenue in what turned out to be the last hour of the last day of a large André Kertész show. As luck would have it, who should be there but the master himself. I dashed across the street to a Doubleday Bookshop, and bought a copy of On Reading. He signed the book for us, and sometime after we got home, I wrote to him that meeting him at his exhibition was one of the highlights of our wedding trip. In time, this reply arrived: This reply to your appreciative letter has been so long delayed that you may have thought the written word had by now completely succumbed to the visual image. 
I am indeed touched by the fact that you feel the Hallmark Gallery exhibit will remain memorable over the years as a part of your wedding trip. You see, that way I know my photographs will have as many happy anniversaries as you and your wife Carolyn! With every good wish for your success as poet and photographer, and my warmest regards to both of you. "In a month we will be married 44 years, and André Kertész was right: his photographs have had many happy anniversaries with Carolyn and me." Rodolfo Canet adds: "Just let me say in public how touching and delightful I found Mr. Kingston's story. Great way to start my day." With only a few offerings and a negligible number of lenses, maybe you didn't think much of Canon as a player in the mirrorless game. But think again—according to Thom Hogan's analysis of BCN data, Canon is coming on like gangbusters in the cameramakers' home market of Japan, capturing 18.5% of the entire mirrorless market in 2016, easily surpassing onetime leader Panasonic and even squeaking past mighty Sony despite Sony's plentiful offerings in a variety of formats. Note that Thom's chart is only for the Top Three and their combined market share, which is why figures for Panasonic don't appear in 2016 and 2015, and why Fuji, Nikon, Leica etc. don't appear on the chart at all. Thom also notes that Japanese camera buyers are "price and size sensitive," greatly preferring small cameras and bargain prices. Thom's other article of great interest is the one reflecting on BCN's recently released data on DSLR market share. It shows Nikon slowly slipping in the home market, and Canon going from strong to stronger. Thom now has many sites covering many aspects of camera gear, but his core or legacy identity is as a Nikon site (the tagline on dslrbodies.com is "Supporting the Nikon F-mount on the Internet since 1994..."), so naturally his article concentrates on the ramifications of the data for Nikon. 
The article also charts the steady deterioration of Nikon's rank in lens sales, from 23.2% of the interchangeable lens market in 2009 to 12.5% last year. Sigma has now taken over the No. 2 position in interchangeable lens sales in Japan, behind Canon. Nikon is not even in the top three. Jim Bullard: "I'm not surprised. The 'M' cameras have taken a bum rap in the U.S. but I like the M3 in particular. I wish it had a fully articulated screen on the back but, on the whole, I love it. Good size, reasonable price, and great optics." Geoff Wittig: "This is ironically bad news for us Canon users. Nikon's technical prowess over the years has provided the competitive challenge keeping Canon from becoming completely complacent. No doubt it's Nikon's brilliant D800/810 series (and their Sony sensors) that forced Canon to finally start addressing its lag in high ISO noise and dynamic range capabilities. If Nikon slips further and goes under (a distinct possibility, since Nikon is basically a camera/lens company, while Canon and Sony have far wider product lines), Canon may get even fatter and happier with what they already have. This would make me very sad; I greatly prefer the larger form factor of 'standard' DSLRs over mirrorless cameras, which just feel too small and fiddly to me." When last we were talking about books, RubyT mentioned that she used to read 300 books a year. That's well into outlier territory, it seems to me. While phenoms, speed readers, invalids or the truly obsessed might log more than that, I would guess 300 books is quite a few more books than most people read. So then posit a 70-year adult reading life, again verging into outlier territory. There are ~300,000 new and revised titles published in the United States and another ~180,000 in the United Kingdom, not to mention ~28,000 in Australia, every year. That's just the three leading English-speaking countries. Worldwide the estimate is 2,200,000 titles annually. 
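The back-of-the-envelope arithmetic here is worth making explicit. A minimal sketch in Python, using only the figures above (the 300-books-a-year reader and the 70-year reading life are the hypotheticals from this paragraph):

```python
# Lifetime reading capacity of the hypothetical outlier reader
books_per_year = 300
reading_years = 70
lifetime_books = books_per_year * reading_years  # 21,000 books, total

# Annual new and revised titles in the three leading English-speaking countries
annual_english = 300_000 + 180_000 + 28_000  # US + UK + Australia = 508,000/year

print(lifetime_books)                   # 21000
print(annual_english)                   # 508000
print(annual_english / lifetime_books)  # one year's output is ~24 lifetimes of reading
```

In other words, even the heaviest plausible reader's entire lifetime of books is swamped roughly twenty-four times over by a single year's English-language output.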
Most of the non-English titles never make it to translation. Even if you assume that only 5% of all those titles in English reach any level of worthiness (there's a lot of cynical bookstore fodder, specialty titles, children's books, lowest-common-denominator entertainment, and just plain junk that gets published), that's still more titles published in the three leading English-speaking countries every year than our hypothetical heavy reader will be able to read in a lifetime. Next, add in even just the very best of all the books published in the 542 years since Caxton published Recuyell of the Historyes of Troye, the first book printed in English using moveable metal type, in 1475. How many is that? I have no idea, but it's more than a heap. Further, I have no idea how many foreign-language books are translated into English every year. But if we were to assume that 1% of books published in other languages make it into English, that's another 16,000+ books every year for you to miss most of. Even if you read 300 books a year—I hit about 65 in a good year, and 2016 was not a good reading year for me—you can't begin to survey more than a tiny smattering of all the worthy books that exist. The bull-by-the-horns, brute-force solution might seem appealing: Work at it harder! Devote more time! Read more books! But that's like buying more lottery tickets. In terms of consuming what's available to read, it increases your exposure to the set of "all books in English" so infinitesimally that it's hardly worth the effort. If you want to read more because you enjoy it or want to learn more, great, but you're still only dipping a toe into the ocean. I'd like to suggest an alternative: do the exact opposite. Read fewer books, but engage with reading more. 
Read fewer books, but pick them more carefully and read them more carefully; "read around" them, by reading related books, by the same author or different authors or literary critics; own multiple editions, with different introductions and apparatus; learn about the genre, the tradition, the author's influences and ideas; and so forth. Whatever makes the experience of the book richer and fuller for you. Don't consume more, in other words; consume better. I'm not saying you should do this with every book you read, or every author, of course. But if you really engage with one or two authors, two or three books a year, they "become yours" in a way books don't tend to do if you just rush through one to get to the next. The ones you engage with act as stand-ins for all those books you'll never read, all those authors you'll never sample, all those experiences you'll never have. If you can't experience every book, at least you can fully enjoy a few of the books you do experience. Well, I started out meaning to relate this idea to experiencing photography and dealing with the unending digital tsunami—everything above this paragraph was just the introduction—but I see this is already too long to be a blog post and I haven't even gotten there yet. Hmm, maybe writing books, "writing long," is not such a good idea for me after all! At any rate, as part of the audience for photography and photographers, we can't possibly see more than a tiny, tiny smattering of all the photographs that are available for us to see. Engagement is the best strategy, I think, to make our experience of the photography we do get to see richer and more satisfying. But how we would go about that is going to have to be a topic for another day. You have probably already figured out your favorite ways of engaging with work you like—maybe you could tell me! mike plews: "Lots of books on becoming a better writer but not many on becoming a better reader. 
This one is great, highly recommended." Ernie Van Veen: "To be fair, most of those 2.2 million would be cookbooks. No, really." rusty: "Engaging with work (photos and photographers) I like comes through the filter of this and a few other worthy blogs: Lenscratch, Don't Take Pictures, LuLa and some others found through the above." Marvin G. Van Drunen: "Does listening to audio books count as reading? My commute from home to office and back takes about 80 minutes per day five days a week, 50 weeks per year. I think that works out to 333.3 hours per year. I started, several years ago, to use that time to listen to books, mostly history and biography. The last three titles I have listened to are: Lyndon Johnson and the American Dream by Doris Kearns Goodwin, The Fall of Berlin 1945 by Antony Beevor, and The Romanovs by Simon Sebag Montefiore. I'm currently listening to Benjamin Franklin: An American Life by Walter Isaacson. I really have learned a great deal over these years and I think that I am using my commuting time wisely. I actually look forward to the time spent. So, I'm not sure if this counts as reading, but I do love the experience." Mike replies: Not sure if you're asking me, but I'd say heck yeah, that's reading. And it sounds wonderful, too. Sounds like you have a great commute. And by the way, a bookseller I respected once told me that the Walter Isaacson book you're listening to now was the best book he'd ever read. Tom Hassler: "'Don't consume more, consume better'—excellent words to live by in all facets of life!" Dennis: "Life is too short to drink bad wine. Actually, I gave up on the idea of being anything remotely like a wine connoisseur. I read a bunch about wines, started learning about regions, visited specialty shops, but the vast amount of wine out there coupled with the fact that I really don't drink much wine (I just enjoy it when I do drink it) made me realize I'm never going to try enough of it to develop a sophisticated appreciation. 
I know of people who discover they really like a particular wine enough to buy a couple cases of it. I might go through a case or two a year (and that's sharing with company). I joined a wine club, but quit after three months when I'd stockpiled enough for the next year. So I ask for recommendations. I can tell someone at the shop what I'm looking for and get a pretty good bottle. "With photography, it's less a pull method than a push method—I'm not looking for recommendations, but there are people providing them. And it's quick and cheap to sample someone's work before committing to buying a photo book." Jaap: "The book 'problem' multiplies when you are multilingual." RubyT: "After I got over being startled to see my name in the first sentence, I realized this was the perfect opening to thank you for recommending Blue Highways, which I am now reading. I bought through your links and the seller never even mentioned the book had been signed by the author. I have a special box for inscribed books—important things are in the basement because we get windstorms here. It's one of many books you have recommended that I have enjoyed. I have a 'fast processor' (I'm sure this is not what they would have called it when I was a child), something I did not know existed until my children were in school and were tested. Two of them have it, two of them do not. It's a tremendous advantage in terms of being able to read quickly and still comprehend well. Perhaps less of an advantage in that it leads to poor study habits if you can finish all the homework in school and spend all your free time reading. I feel lucky to have been an introverted child whose parents wouldn't pay for cable TV. Back then I had so much free time, and I spent most of it reading. I also agree with your point today. Now that I have responsibilities in life I am much more selective. I used to obsessively finish every book, even if I didn't like it. 
I won't do that now, I don't want to sacrifice time for something that doesn't fully capture me." David Dyer-Bennet: "I certainly read a lot, but it's mostly re-reads. When I have the energy to engage with something new, I usually end up using it to make something (well, either that, or go lie down until the feeling passes). I logged a bit under 100 new books for 2016—but that doesn't include re-reads a lot of the time, so the total number is five or six times that. At any given moment I have a book or two going on paper and a book going on my phone (my primary electronic reading device)." Paul: "The flip side of too many books appearing in print is the problem of really good books being out of print or being simply hard to find. There are a few authors whose books I relish and I feel fortunate in having found them early enough to get copies of everything they wrote, but I think it's a shame that others either can't find them (certainly not in bookstores) or just aren't aware of them. I've long wished that I could find a resource where people share their favorite authors, so that we could identify others with overlapping tastes, and then sample their other favorites. That might help address the frustration I have that there are books I'd love to read but just don't know about them." Mike replies: There's always NYRB Classics. That's tip o' the iceberg, of course, and doesn't contradict what you say. My browser crashed and failed, of all things. I didn't even know that was a thing that could happen. When I got it back again, all my bookmarks were gone. Firefox is pristine, clean and virginal again. I knew I had a lot of bookmarks (maybe into four figures), but I had no idea how many sites I had set up the way I like them, or how many passwords Firefox was keeping sorted for me. Lots and lots. Turns out I don't know how to reset many of the settings back to the way they were. 
And no idea to what degree "muscle memory" and habit had taken over as far as where things were and how to get to what. A great number of website passwords were stored within Firefox too, not in my computer. And of course many of them were passwords I didn't have memorized and don't have any record of. Hmm. Anyway, I'm surprised at how much it slows me down. I store a lot of information for future posts using bookmarks, for example. ...And people wanted me to switch from Mac to PC. Ha! I can't even switch browsers. Firefox had been wonky for me for months. Tabs kept crashing, pages were slow to load or loaded incompletely. I was using the latest version and kept the caches clear, but clearly something was wrong. Now I feel kinda foolish, like you feel when your car is acting up and you ignore it and eventually get stranded. Meanwhile, a book report: I'm making good progress on my books. Plural: I'm writing both the story of my son's birth, and a book for photographers. The former is slow going, with many false starts and wrong turnings. And psychologically heavy, because I'm obliged to revisit old times and discover memories lurking that I didn't know were still there. But I'm happy about the way both of them are going and I just love working on them. I really love the work. I'm rather lazy by nature, and I didn't want to write a book for photographers that...ages too fast. Because I don't want to have to rewrite it every three years. I mean, you wouldn't be interested in reading a book now that was written around 2007-era technology, would you? (What, you don't need to read the latest on uprezzing 8-MP files and fastidious comparisons of shadow noise between the latest 2007 cameras?) I wanted something that would stay fresh and still be useful after ten years, possibly, not just three. 
So I tried outlines of several ideas or strategies, conceptualizing hard about the best way to help all sorts of people, young and old, new to photography or veterans, serious or casual. After a longish struggle trying to solve the problem I think I lit on an organizing idea that's robust and flexible and gives me great leeway for engaged discussion. I'm very happy with the concept, and since I arrived at it the writing seems to be cruising along quite nicely at a steady airspeed. Which is a good sign. "So far so good" as the old saying goes. The photography book is all about seeing, self-direction, aesthetic tastes and working methods, though...not gear and not technique. Anyway I just thought some of you might be interested in how these projects are proceeding. They're what I do on the weekends now. Minimal use of bookmarks needed. [UPDATE Tuesday morning: All is well. I "refreshed" Firefox several days ago, in an effort to help improve its lagging performance. At that time, Terminal appeared and I was asked to choose between two profiles: "Mike's Firefox" and "Default." Naturally I chose "Mike's Firefox." A couple of days later, when I next restarted the computer (I usually restart every day, but not always), that choice kicked in...only it turns out that "Mike's Firefox" was what Firefox named the clean-sweep, nothing-added re-set profile, and "Default" was what it named, well, my profile. So all I had to do to fix it was select "Default" as my profile and all my information (except extensions and add-ons, which are deleted during a refresh) reappeared. Oh, and I did one other thing...I renamed "Mike's Firefox" to "Not Mike's Firefox!" So I don't get confused again in the future. In any event, you might want to look into your own browser housekeeping and hygiene—find out where your bookmarks are backed up and how to restore them. 
If others can learn from my little misadventure and be saved a bit of hassle in the future, then it might even have been worth it. Thanks to everyone who offered help and advice! Birds have been Xavi Bou's great passion ever since the long nature walks he took as a child with his grandfather. His project "Ornitographies," in which "art and science walk hand in hand," maps the traces of birds in motion across the sky; think of the pictures as the opposite of photographs that freeze motion with high shutter speeds or flash. It's surprising how varied and beautiful they are. Xavi, who graduated from the Grisart International Photography School in 2003, works in the fashion and advertising industry in Barcelona, Spain. The prints from our last print sale have begun to ship. If one is coming to you I think you're in for a treat, and I'm eager to hear what you think. Although my guess is that this is a picture you'll like better after you've lived with it for a while than you will when you first see it. If it occurs to you, in the future, let me know if I'm right about that. John even said he likes it better after printing it for a week. Thanks again to John Lehet and everyone who participated.
2019-04-19T18:53:54Z
https://theonlinephotographer.typepad.com/the_online_photographer/2017/01/index.html
Among the defendants at the Nuremberg War Crimes Trial were German doctors who before the Holocaust had euthanized mentally defective, purebred Germans. Dr. Leo Alexander, who worked for the chief counsel for war crimes, had interviewed those physicians before the trial. In a prophetic article in the July 14, 1949, issue of the New England Journal of Medicine, Alexander examined the initial causes of the Holocaust. The beginnings, he wrote, were merely a subtle shift in emphasis in the basic attitude of the physicians. It started with the acceptance, basic in the euthanasia movement, that there is such a thing as "life not worthy to be lived." The Nazis described the patients they killed as "useless eaters." Not long before Alexander's death in 1984, he warned that the same lethal attitudes were taking root in this country. He cited the rise of the death-with-dignity movement, which advocated what later became more widely known as assisted suicide: doctors providing the means for patients to kill themselves, a practice now legal in Oregon. Recalling his research for the Nuremberg trials, Alexander said of what was happening here: "The barriers against killing are coming down." A new book by Wesley Smith, "The Culture of Death: The Assault on Medical Ethics in America" (Encounter Books, 2001), documents Alexander's concerns more fully and lucidly than any volume yet published on whether humanity will be able to remain humane. Writing about "The Culture of Death," Dr. N. Gregory Hamilton, president of Physicians for Compassionate Care, points out that prominent bioethicists now claim "the value of each human life can be traded off in complex cost-benefit ratios. Members of the bioethics elite have quietly convinced many of our judges, hospital administrators and doctors that some human lives have relatively less value, and therefore less right to equal protection." I have known and read Mr. 
Smith for a long time, and I have often cited him in this column because of the range, depth and accuracy of his research. His new book names a number of these bioethicists whom I called, years ago, the new priesthood of death. He shows how their influence began and grew, and tells of patients who have been subject to final decisions by doctors -- often against the patients' wishes and the wishes of their relatives -- because it was thought that their lives were no longer worth living. It's called involuntary euthanasia. As Mr. Smith says in his book: "With the exception of assisted suicide -- due mostly to the widespread media coverage of Jack Kevorkian -- most people are but dimly aware of what is happening." Popular culture, he adds, promotes many of these practices as a compassionate response to the trials and tribulations of illness. Like Alexander in 1949, Mr. Smith is trying to alert all of us to the falling barriers against killing. Moreover, he warns that a consequence of this devaluing of disabled and otherwise fragile lives is the creation of a duty to die. "I have debated academics who seriously believe that people who are no longer productive should die rather than expect their families and the rest of society to pay what it costs to keep them alive." In the Cambridge Quarterly of Health Care Ethics last fall, there was this medical advice by Drs. Lawrence J. Schneiderman and Alexander Morgan Capron: "A judge who orders that a severely disabled child be kept alive rarely sees firsthand the long-term consequences of that decision, which remain a continuing vivid experience for the health professionals who must provide care for the child." Therefore, so that these professionals can be relieved of such a vivid experience, a compassionate judge should order that the child not be kept alive. That is the culture of death. Mr. Smith ends "The Culture of Death" with the following words: "We all age. We fall ill. We grow weak. We become disabled.
A day comes when our need to receive from our fellows adds up to far more than our ability to give in return. When we reach that stage of life . . . will we still be deemed persons entitled to equal protection under the law?" If only in self-defense, you ought to read "The Culture of Death" and discuss it with your doctors and your family. And put your wishes in writing. Maine voters' rejection on November 7 of an initiative to legalize physician-assisted suicide was only the latest in a string of defeats for the American euthanasia movement. Granted, the margin was narrow -- 51.5 percent to 48.5 percent. And with the Netherlands finally in the process of formally legalizing assisted suicide, no one should infer that this tenacious international movement is dead. Still, its advocates in this country have failed to move the ball since 1994, when Oregon voters passed a legalization initiative. The latest setback should spur the media to give less coverage to killing as "medical treatment" and more to the underreported subject of truly compassionate assistance to the dying, such as pain control, symptom management, and hospice care. If the assisted-suicide movement was rebuffed in Maine, it was not for lack of investment in the campaign. Euthanasia activists from around the nation had carefully selected Maine as the most promising site for a breakthrough. "Maine is a small state with a small media market, and proponents believed that they could carefully control the message," explains Rita Marker, executive director of the International Anti-Euthanasia Task Force. "More importantly, some of the most vulnerable groups who oppose the assisted-suicide agenda nationally -- disability rights activists, minorities, advocates for the poor -- are not as numerous in Maine as they are elsewhere in the country, and thus assisted-suicide activists had substantial reasons to be optimistic about their chances of prevailing."
The practice in Maine of allowing an initiative's proponents to determine the wording that appears on the ballot also favored the measure. Euthanasia activists couched Question 1 in soothing language: "Should a terminally ill adult who is of sound mind be allowed to ask for and receive a doctor's help to die?" And to mobilize support for it, they mounted a national full-court press. Euthanasia organizations from all over the country urged their members to donate time and money to the campaign, with much success. More than 90 percent of the financing for the "Yes on Question 1" campaign came from outside Maine. Many of the nation's best-known assisted-suicide proponents -- including Oregon governor John Kitzhaber -- strove to persuade Maine voters to make it legal for doctors to write lethal prescriptions. Initially, public support for the measure was high -- 70 percent, according to the Bangor Daily News of February 17, 2000. But as the campaign progressed and voters considered assisted suicide in the context of HMO cost-cutting, the potential for abuse and coercion, and the problems reported in Oregon despite the secrecy surrounding the practice there, public support steadily waned. When the final tally was made, the initiative lost by almost 20,000 votes. The same pattern of early support for assisted-suicide initiatives, dwindling to eventual defeat at the polls, occurred in Washington state (1991), California (1992), and Michigan (1998). Even in Oregon, where the initiative passed, support shrank from nearly 70 percent at the beginning of the campaign to just 51 percent in the final tally. The euthanasia movement, moreover, has also been stymied in the courts and legislatures. In 1997, its advocates failed to persuade the U.S. Supreme Court to issue an assisted-suicide Roe v. Wade. The vote in Washington v. Glucksberg was unanimous, a rare achievement for our often divided high court. 
Only a few months later, the Florida Supreme Court refused to rule that assisted suicide was a right under the privacy guarantee in the Florida Constitution. And in 1999, a court in Michigan sentenced euthanasia's most notorious practitioner, Jack "Dr. Death" Kevorkian, to 10 to 25 years in prison for the murder of Thomas Youk. Thus ended a macabre career that had helped eliminate some 130 people and made kidneys removed from one disabled victim available to the public on a "first come, first served" basis. Kevorkian had outworn the patience of law enforcement by arrogantly providing a videotape of his crime for airing on the program 60 Minutes. As for the legislative arena, not one of the many euthanasia bills introduced at the state level has had a realistic chance of passage. A robust coalition came together to fight these bills. In addition to the constituencies mentioned by Rita Marker, this alliance includes hospice professionals, religious organizations, pro-lifers, and medical associations, all of them willing to set aside their differences on other controversial issues in order to unite against the proposition that doctors should have license to kill their patients. The only prospect euthanasia advocates have for gains in the immediate future is in Alaska, where a lawyer for the misnamed Compassion in Dying Federation has sued under the privacy guarantee of the state constitution to overturn the state ban on assisted suicide. The suit failed in the trial court and was recently argued before the Alaska Supreme Court, where the justices noted the Florida high court's refusal to legislate from the bench. The Alaska decision is expected next year. Whatever happens in Alaska, assisted suicide won't soon be widely legalized in the United States. Thus, the time has come to look beyond a movement that actively harms the dying and disabled people it purports to help. 
Not only does it disparage the value of their lives, but it diverts media and popular attention from all that medicine can do to make people's dying days worth living. It is high time that the issue of end-of-life care be given serious and concentrated consideration. For example, it is a national scandal that only 29 percent of Americans who died in 1999 received hospice services, and those who did often did so for only weeks or days. By contrast, in England the figure is 65 percent, and most hospice patients receive care for many months. For 30 years, the British have been educating the public about care for the dying, making hospice a household word. Nor do they place policy impediments between dying patients and hospice care -- as we do in the United States, where patients are required to refuse all further curative treatment in order to receive hospice relief. According to Dame Cicely Saunders, the creator of the modern hospice movement, this irrational American rule makes patients, families, and physicians far less likely to turn to a hospice, which is seen as the end of all hope. In an era when the media are addicted to scandal, assisted suicide makes for juicier copy than hospice care and pain control. But the stalling of the euthanasia movement can and should change that. The big story should be the challenge of creating a medical environment in which no American dies alone or in pain. Brussels, Belgium -- Belgian lawmakers have agreed on a draft law to legalize euthanasia in certain cases, subject to approval by parliament later this year. Tuesday's vote was split 17 to 12 with one abstention. The law, under consideration for about a year, has been subject to public hearings, beginning last May, and to considerable legal wrangling, with the Christian Democrats staunchly opposed to legalisation and the Socialist-Liberal-Green coalition advocating it. If passed, it would make Belgium the second country after the Netherlands to vote to legalize euthanasia.
The Belgian proposal is similar to the Dutch legislation. Senators from two parliamentary working groups agreed to the final text of a draft law to legalise two types of requests for euthanasia -- by terminally ill patients and by patients with incurable diseases who may have years to live but are in extreme pain. The draft legislation is expected to be presented to the upper house of parliament within the next month, said Senator Frans Lozea, who took part in the debate. "The lower house will then vote quite quickly afterwards," Lozea told Reuters on Wednesday. The government has given politicians a free vote on this issue, meaning they are not bound by their party's position. Under the proposed legislation, requests for euthanasia must be made by a patient who is conscious when making an active, voluntary demand. The request must also be persistently repeated. Washington, DC -- A study published October 3 in the Annals of Internal Medicine found that support for assisting suicide and euthanasia among oncologists (physicians specializing in cancer treatment) declined by more than half between 1994 and 1998, a drop the study authors attributed primarily to "expanding knowledge about how to facilitate a 'good death,' making euthanasia and physician-assisted suicide no longer seem necessary or desirable." Oncologists who reported that they could get their dying patients all necessary care were over four times less likely to have performed euthanasia, compared to those who reported that administrative, fiscal, and other barriers allowed them to provide only some of the care needed by their dying patients. Those who reported having sufficient time to talk to dying patients and those who believed they had received adequate training in end-of-life care were less likely to have performed euthanasia or physician-assisted suicide. The study authors, Dr.
Ezekiel Emanuel and seven others, wrote that the data "lend some support to [the] concern [that] inadequate access to palliative care might make euthanasia and physician-assisted suicide attractive alternatives." The study, with 3,299 participants, found that 22.5% of oncologists supported physician-assisted suicide (PAS), compared with 45.5% in 1994. The shift was even more dramatic with regard to euthanasia (here understood to mean the doctor killing the patient, as by lethal injection, instead of providing the patient the means to commit suicide, as in PAS). Only 6.5% supported it, compared to 22.7% in 1994. Of doctors who had actually performed PAS (10.8%), 18% had done so five or more times. Additionally, 3.7% had performed euthanasia, 12% of whom had done so five or more times. "The significant decline in cancer specialists who support euthanasia demonstrates that the answer to pro-euthanasia activism is not to legalize killing but to redouble efforts to improve care," said Burke J. Balch, J.D., director of NRLC's Department of Medical Ethics. "You don't solve problems by getting rid of the people who have them." Maine voters will decide November 7 whether to join Oregon in legalizing assisting suicide. A bill now before the U.S. Senate, the Pain Relief Promotion Act (passed by the House in 1999) would end the use of federally controlled drugs to assist suicide, while implementing programs to improve pain relief as a positive alternative. Something terrible happened in The Netherlands last week that will have profound consequences around the world. Not since Germany occupied Holland six decades ago has there been an official policy declaring some people unfit to live and worthy of forced extermination. But last week, the Dutch parliament gave final approval to a new euthanasia law that will allow doctors to end a life when it is subjectively decided that the life is no longer worth living.
The usual assurances have been given by government and the medical establishment to pacify the masses, 10,000 of whom demonstrated against the measure outside the Parliament building in the Hague the day of the vote. Two doctors will have to validate that the patient is terminally ill, that he or she suffers "unbearably" and wants to die. But the principle of an unalienable right to life, which began to fall when abortion was legalized, has crashed to the pavement with the Dutch government's validation of euthanasia. What happens now that the only principle protecting human life has been compromised? Who is to say "no" when voters, opinion polls and "experts" say "yes"? A "right to die" is quite different from an individual's right to refuse treatment. The common law has long recognized the latter. But when a right is elevated to the level of a constitutional principle, then the courts are empowered to decide whether to expand or contract that right, as they did in America beginning with the so-called "right to privacy," which is what led to the elimination of all abortion restrictions. People who lightly esteem human life and think of us as having been produced by an evolutionary process stemming from random chance are the most likely to embrace euthanasia. nonpersons (of) the elderly." This, they note, "will become increasingly so as the proportion of the old and weak in relation to the young and strong becomes abnormally large, due to the growing antifamily sentiment, the abortion rate, and medicine's contribution to the lengthening of the normal life span. The imbalance will cause many of the young to perceive the old as a cramping nuisance in the hedonistic lifestyle they claim as their right. As the demand for affluence continues and the economic crunch gets greater, the amount of compassion that the legislature and the courts will have for the old does not seem likely to be significant considering the precedent of the nonprotection given to the unborn and newborn."
Kuyper": "Queen Wilhelmina opened Parliament with the annual Speech from the Throne, written by the Prime Minister...Emphasizing the spiritual interests of the nation, the Queen declared that cabinet policy would be based on the Christian foundations of society. The ethical character of public life would have to be more carefully protected by law." The moral descendants of Kuyper demonstrated outside Parliament last week in favor of that view, while inside legislators marked the centennial of Kuyper's inaugural with a vote that was the antithesis of everything he believed. How quickly a society can slide toward perdition with no compass and no objective standard of right and wrong. First, the Nazis dehumanized the Dutch Jews. Now the Dutch - Jews and Gentiles (with others soon to follow) - are dehumanizing themselves. Washington, DC -- Doctors whose terminally ill patients ask for help in ending their lives are often forced by an "unspoken code of silence" to decide on the request alone, without the advice of fellow physicians, researchers said last week. In a study published in the Archives of Internal Medicine, researchers interviewed doctors in Seattle and San Francisco who had received at least one request from a terminally ill patient for help in committing suicide. Half of the doctors had helped a patient to die, while the other half had not -- suggesting that laws against assisted suicide are being broken in the 49 states which make the practice illegal. Although assisted suicide is against the law in every U.S. state but Oregon, doctors who care for terminally ill patients regularly hear suicide requests from patients, the researchers said. But there has been little documentation of how these requests are handled, they said. The study was based on interviews with 20 doctors. A heavy emotional burden accompanied the isolation experienced by the doctors, Kohlwes said.
A few said they were worried about becoming known publicly as the "local Kevorkian," Kohlwes said, referring to Jack Kevorkian, an assisted-suicide crusader convicted in 1999 of second-degree murder in Michigan. The Hague, Netherlands -- Despite protests outside parliament, the Netherlands legalized euthanasia and assisted suicide Tuesday, becoming the first nation to allow for both practices. About 10,000 euthanasia opponents surrounded the building, praying, singing hymns and quoting the Bible, while the upper house of parliament, the Senate, considered the legislation. The Senate voted 46-28 in favor of the law, likely to take effect in the summer. Before the vote, Health Minister Els Borst reassured legislators the bill could not be abused by doctors because of careful supervisory provisions. The law presupposes a long doctor-patient relationship and requires patients be legal residents of the Netherlands. "There are sufficient measures to eliminate those concerns," Borst told the senators. Assisted suicide, she said, will remain a last resort for those who have no other choice but endless suffering. The law formalizes a practice discreetly used in Dutch hospitals and homes for decades, turning guidelines adopted by Parliament in 1993 into legally binding requirements. Doctors can still be punished if they fail to meet the law's strict codes. Outside parliament, some protesters were masked in black balaclavas and carried oversized syringes dripping with fake blood. Others gathered signatures for a petition that already had 25,000 names before the debate opened Monday evening. Several Christian schools canceled classes to allow students from across the country to participate in the demonstrations. After the vote, they said they were disappointed but not surprised. The Senate vote was considered a formality for the bill, already passed by the lower house.
Despite the strong showing of opponents on Tuesday, van der Hoek, who belongs to the Dutch Reformed Church, admitted he is one of a small minority in the Netherlands, once a stronghold of Christian politics. In the debate, Borst said a broad consensus had coalesced after 30 years of discussion, claiming the backing of some 90 percent of the population for the changes. Under the law, a patient would have to be undergoing irremediable and unbearable suffering, be aware of all other medical options and have sought a second professional opinion. The request would have to be made voluntarily, persistently and independently while the patient is of sound mind. Doctors are not supposed to suggest it as an option. The new law also would allow patients to leave a written request for euthanasia, giving doctors the right to use their own discretion when patients become too physically or mentally ill to decide for themselves. An independent commission would review cases to ensure the guidelines were followed. If a doctor is suspected of wrongdoing, the case will be referred to public prosecutors for review and possible punishment. Several countries - Switzerland, Colombia and Belgium - tolerate euthanasia. Belgium is the only other country currently considering making assisted suicide legal. In the United States, Oregon has allowed assisted suicide since 1996, but its law is more restrictive than the Dutch bill. In Australia, the Northern Territory enacted a law in 1996, but it was revoked in 1997 by the federal parliament. Early reaction from abroad, however, was negative. Russian Health Minister Yuri Shevchenko, interviewed by RTR state television, said the law would be wide open to abuse. "Imagine an ill, old man induced to die with his belongings and small apartment taken from him. This is a great sin and we must not allow it," he said. The Illinois-based "Not Dead Yet" organisation, a U.S. disability rights group, also condemned the action.
"The Dutch experience with euthanasia is best described as one of increasing carelessness and callousness over the years," it said in a statement. An influential Roman Catholic bishop in Poland also spoke against the new law. "Euthanasia allowed in one sphere ... can slip out of control and embrace other groups of people -- those unwanted and disabled," said Bishop Tadeusz Pieronek, former secretary general of Poland's episcopate. In contrast, Australian anti-euthanasia campaigners do not expect any "ripple effect" from the Netherlands becoming the first country in the world to legalise euthanasia. "I can't understand the Dutch, I really can't," said Right to Life chairwoman Margaret Tighe, pointing to the rejection of euthanasia elsewhere in Europe, Australia and the United States. "I believe that when the history books are written in years to come, people will look back in sorrow and in anger at what the Dutch have allowed to happen because (voluntary euthanasia) is a very, very slippery slope," Tighe told Reuters. The drafters of the Dutch bill denounced a plan from Australia's leading euthanasia campaigner to set up a floating clinic in a ship flying the Dutch flag off the coast. Philip Nitschke had said if the Dutch legalize euthanasia he would offer clients lethal injections in international waters off the Australian coast. Borst said the Dutch government would do "whatever it could" to counter any such effort and stressed that the scheme "could by no means" fit into the Dutch rules. The Hague, Netherlands -- Tuesday's development prompted pro-life campaigner Dr. Bert Dorenbos to say he felt "ashamed" at what is being done in his country's name. Speaking by telephone from the square in front of the upper house of parliament, Dorenbos said he was convinced most Dutchmen and women were opposed to what was being perpetrated by "a small group of hardliners." "We are trying to excel in evil. We see the result.
It doesn't bring happiness, it brings problems and death," Dorenbos said. "But I'm convinced this is not the will of the majority of Holland. We're working to revive the good spirits of the Dutch people, and there are many [right-minded people] in this country." On Tuesday afternoon, the protestors outside parliament were joined by thousands more arriving from around the country, brought in by schools, youth groups and women's organizations, for a silent protest, he said. Dorenbos is president of a group called Cry for Life, which Monday night handed lawmakers a petition bearing 40,000 signatures, appealing to them to defeat the Bill. But he acknowledged that the chamber was weighted in favor of euthanasia. A simple majority of the 75 Senators is required to pass the law, which the lower house passed by 104 votes to 40 last November. The three parties comprising the ruling coalition, Labor, VVD and D66, hold 38 seats between them, while the Greens, with eight seats, also support the Bill. It is opposed by the Christian Democrats, the Socialists and smaller Calvinist parties. Dorenbos predicted a reversal in the Netherlands too. "Things will change. [Euthanasia] is an offense against human rights. I believe that soon the whole pro-death mentality will be turned upside down." Dorenbos said some Dutch euthanasia campaigners wanted to push things further, making it even easier for a person to demand suicide than the new law's restrictions will allow. "[Some] pro-euthanasia people say that every person has the right to kill himself at any point, not when he is terminally-ill, just fed up with life." This law, he said, was merely another step towards an even more dangerous situation, and so the battle would continue. Dorenbos said he suspected the slide would be halted by doctors who eventually dig in their heels, saying that they were being forced to take lives at their patients' demand, rather than trying to save them. "Doctors should be the last people to kill," he argued.
If the government legalized euthanasia, it should also appoint official killers. "It's a horrible thought, but I'm just following their mindset." Earlier this year, Dutch pro-life activists were hoping that a murder conviction of a doctor who ended a terminally-ill patient's life prematurely may have helped to swing legislators' opinion against the law being voted on today. Wilfred van Oijen was convicted of murder for killing an 84-year-old woman in 1997. Although doctors have up to now been allowed to hasten patients' deaths under prescribed circumstances, he failed to get the woman's go-ahead. Neither did he get another doctor's opinion, as stipulated. Among those waiting for the Netherlands to legalize euthanasia was an Australian doctor who wants to acquire a Dutch-registered ship, then anchor in international waters off his home country and offer euthanasia while circumventing Australian law. Philip Nitschke, who killed four patients in the Northern Territory before the euthanasia law was abolished, did not respond this week to emailed queries about his controversial proposal. But he was quoted earlier as saying he knew of many people who would take up his services if his floating euthanasia clinic was operating. Australian pro-lifers have called the plan "bizarre." Dorenbos said Tuesday his group had approached lawmakers about stipulating that the new law should not enable people like Nitschke to exploit Dutch legislation in this way. "The fact that this man is doing it is proving that what we are doing is evil," he said. "That he's choosing a Dutch ship is proof that these people like to work in the dark." essential factor for requesting euthanasia in the Dutch legislation is that a person be experiencing unbearable suffering." expertise necessary for excellence in patient care, rather than protecting their patients." says Dr. deVeber. recognized as a type of unbearable suffering, even though it is normally treatable. 
This raises serious questions about the law's ability to protect people who are emotionally vulnerable." The law, which allows for an incompetent patient to be euthanised, "fails to protect vulnerable citizens in the Netherlands and calls into question the integrity of the Dutch euthanasia model," says Alex Schadenberg, Executive Director of the Coalition. based on the Dutch model. Washington, DC -- The 14,000-member Christian Medical Association (CMA) today lamented the Dutch Parliament's vote to legalize euthanasia, saying the policy will further corrupt the medical profession and lead to more involuntary deaths. CMA Executive Director David Stevens, MD noted, "When the Dutch government has sanctioned euthanasia in the past, statistics have shown that over three out of four Dutch patients who died through medical intervention never even gave their consent. Decriminalizing this tragic practice will now open the door to even more involuntary deaths and destroy trust--the foundation of the doctor-patient relationship." Stevens added, "This is not a question of who will have the right to die; it's a question of who will have the power to kill. What's being promoted as 'freedom of choice' flies in the face of the Dutch government's own Remmelink Report, which documents thousands of cases of patients put to death without their consent." The Christian Medical Association conducted intensive on-site research in the Netherlands, interviewing experts as well as family members who have suffered the consequences of involuntary euthanasia. CMA also produced a video and provided testimony for Congress about the dangers of the Dutch experiment with officially sanctioned suicide. Stevens said, "We have looked beyond the official statistics and into the faces of real people who have suffered under this tragic policy of turning doctors into killers. It all sounds so libertarian until you realize that the real autonomy lies not with the patient, but with the doctor.
And once the deed is done, the chief witness to the crime is dead." The Hague, Netherlands -- Yesterday, April 9, the Upper House of the Dutch Parliament started the final debate on the Euthanasia Law, which will make it legal to kill patients with their consent under certain conditions. The Lower House of Parliament already agreed on the new law at the end of last year. Cry for Life presented a second batch of 15,000 signatures to members of Parliament before the debate. Earlier, 25,000 signatures had already been given. During the first round of the debate on Monday evening, different parties criticised the procedure through which doctors have to notify the authorities of euthanasia they have committed. The Christian party ChristenUnie voiced repeated concern about the fact that there is a duty to report euthanasia after it has been committed instead of before. The main opposition party, the Christian Democrat CDA, questioned the willingness of doctors to report euthanasia, even now that the reporting committees are operating separately from the office of the Public Prosecutor. Under the present -- already liberal -- law, in force for the last two years, the committees (in which doctors have a large say) have in practice passed judgment on the reported euthanasia themselves. Only in 'questionable cases' was the Prosecutor advised to prosecute. In 1999 (latest data) only 2216 cases of euthanasia were reported to the committees. Of these cases only a very small number was questionable. The problem with voluntary reporting is that the numbers 'below the line' are not clear: the total number of euthanasia cases actually carried out is not known. All recent research has made clear that 60% of all cases are NOT reported. By removing euthanasia from the Penal Code, the government hopes to place doctors in 'a judicially safer environment' so that their willingness to report will increase. However, the present law, which already relaxed the rules, did not have that effect.
Even the speaker from the ruling coalition party, the social democratic PVDA (a supporter of the new law) concluded during the debate that the number of reports has not increased, but decreased by 20% in two years. The party stressed the need to control what is happening and criticised the government of not taking enough action on this point. Even the documents of the reported cases are, contrary to habit, destroyed and not kept for statistic or scientific research. The conclusion of the government that the lower willingness to report was caused by 'start up problems' of the new procedure, was questioned by the social democrats as well. The debate continues this morning. We hope to keep you informed during this day and tomorrow. This afternoon at 15.00 hrs (local time), there will be a large 'quiet protest', organised by numerous churches and organisations to protest against the new law. have consulted at least one other, independent physician, who must have seen the patient and given a written opinion on the due care criteria referred to in a. to d. above. Doctor must terminate the patient's life or provided assistance with suicide ``with due medical care and attention.'' The doctor must notify the municipal coroner after performing euthanasia. Age plays no role. Being simply "tired of life" is not covered by the bill. Here is a chronology of key events leading to Tuesday's vote by the Dutch upper house of parliament to legalise euthanasia. The Netherlands now becomes the only country to make assisted suicide and euthanasia legal, after tolerating the practice for more than two decades. Dutch court outlines conditions which can override doctors' vow to prolong life. It imposes a one-week suspended sentence and one year of probation on a doctor who injected her mother with a lethal dose of morphine. Dutch Supreme Court overturns conviction of doctor who terminated the life of an aged woman who revealed in her will that she had requested euthanasia. 
The court ruled that the doctor had properly resolved a conflict between preserving a patient's life and alleviating suffering. Dutch parliament passes law to regulate mercy killing with a 28-point checklist for doctors to follow in euthanasia cases. Doctors must find that patients are terminally ill, in unbearable pain and have repeatedly asked to die. Euthanasia remains a criminal offence carrying a maximum 12-year jail sentence, but doctors who follow the guidelines are told they should not expect to be punished. Public prosecutor to decide on a case-by-case basis whether to prosecute. Dutch Supreme Court upholds a conviction, but declines to impose a penalty, for a doctor who helped a woman commit suicide at her request. The woman was not terminally ill, but had a long history of depression. The court ruled that the doctor should have consulted an independent medical expert before acting. Dutch court rules in two cases that doctors who ended the lives of two severely handicapped babies at the request of their parents were justified. The doctors should not be punished even though a charge of murder was formally proven. The doctors were the first to be prosecuted for ending the lives of patients unable to express their own will. Government unveils euthanasia reform after official inquiry reveals about 60 percent of mercy killings go unreported by doctors who fear prosecution. Under new measures, reported euthanasia cases are no longer automatically referred to prosecutors, but to an independent panel of medical, legal and ethical experts. Government delivers bill to parliament to legalise euthanasia. Lower house votes to legalise euthanasia under strict conditions. An Amsterdam doctor is convicted of murder, but given no prison sentence, after a court ruled he failed to follow euthanasia principles. Upper house of parliament, the Senate, votes 46 to 28 in favour of legalising euthanasia under strict conditions. 
London, England -- A recent decision by Dutch lawmakers to legalize euthanasia continues to generate shockwaves in Holland and around the world, as the implications sink in. "It is an important moment in western history, which many people don't seem to realize the significance of," said Henk Reitsema, a Dutch pastor, commenting on the parliament's passage of the law. "I am sure that there will come a time in our lifetimes when many of us look back and wonder 'What were we thinking when we let people decide that it was okay to actively take part in killing people with our medical and legal apparatus involved, while the individuals had committed no crimes?'" he said. Although it has been technically illegal until now, euthanasia by lethal injection has been practiced in the Netherlands for about 25 years and more than 3,000 people die this way every year. The new law, whose passage through the Senate next year is considered a formality, provides guidelines doctors must follow to remain within the law. A patient suffering from unbearable pain must make a voluntary, well-considered and lasting request to die. He or she must also be aware of all other medical options and have sought a second professional opinion. The doctor must send a report to a legal and medical commission that will ensure all conditions have been met. But Karel Gunning, a Dutch physician who heads a group called the World Federation of Doctors Who Respect Human Life, said doctors were unlikely to incriminate themselves when submitting their report after killing a patient. "That report is sent to a committee that must judge whether the doctor acted correctly, and on the basis of this report the committee must judge," he said. "But the author of the report is the doctor himself. Can we be sure that the report is truthful?" A doctor would not mention in the report if the patient had been killed against his will, and it would be difficult for the commission to prove that the report was false. 
The "chief witness" - the patient - would be dead, Gunning said. If there were relatives or heirs on the scene, they might be interested in an expected inheritance. Gunning said he deeply regretted that Holland was leading the world in this way. He recalled that, half a century ago, Dutch doctors risked their lives by refusing to participate in Hitler's forced euthanasia program, which "killed over a hundred thousand German patients with a mental handicap." But he expressed optimism that the world would not follow suit. "I don't think the world will follow the Dutch guide. I think the Dutch example will show too clearly that it is impossible to allow killing patients who want to be killed, without taking away the protection of patients who don't want to be killed. "That is too high a price for the 'luxury' of being able to choose euthanasia." The Vatican last week published a document which called the Dutch decision a consequence of a wider "spiritual and moral weakening." It challenged the argument that patients had to be put out of their suffering, saying that now, more than ever, "pain is 'curable,' with adequate analgesic means and palliative care [and] adequate human and spiritual assistance." Treatment should only be stopped, said the document drawn up by the Pontifical Academy for Life, in the extreme case of imminent and inevitable death. There was a substantial difference, it argued, between procuring death through euthanasia and allowing it. "The first position rejects life, while the second accepts its natural fulfillment." Earlier, the head of the Roman Catholic Church in the Netherlands, Cardinal Adrian Simonis, said he remained hopeful that the Dutch upper house of parliament may reject the legislation next year. In an interview with an Italian newspaper, he decried "the modern sickness of man who no longer adheres to truth but to the subjectivity of his feelings." 
Simonis noted that European Union institutions had pointed out that the Dutch law is in conflict with European human rights legislation. Article Two of the European Convention on Human Rights upholds the right to life, as protected by law. Even as the Dutch lawmakers were voting on the bill on September 28, Jonathan Imbody of the Christian Medical Association in the U.S. was delivering a presentation in The Hague, encouraging Christian pro-lifers to continue campaigning against euthanasia. He cited a report in an American medical journal which found that depression and hopelessness, rather than pain, were the dominant reasons patients sought euthanasia. This should alert Christians to the fact that "our battle is not simply over public policy; it is a battle that reaches deep into the hearts and souls of individuals," Imbody said. NEW YORK, MAR. 14, 2001 (Zenit.org).- In rich nations, the elderly are often neglected, something that is not common in poor countries, the Vatican's permanent observer at the United Nations said recently. "Who are these older persons?" Archbishop Renato Martino asked during the meeting of the Preparatory Committee for the second World Assembly on Aging. "Are older persons those who have reached 60, or 70, or 80 years? Maybe it depends upon the direction from which the age is viewed." The Vatican representative wondered why the elderly in the developed world must end their days "abandoned or forgotten in a care center or nursing home, while so many in the developing world view old age with reverence, and older persons are respected and valued as a treasure of wisdom, tradition and heritage?" Archbishop Martino added that it "is horrible to think that just as the world begins to make great advances in prolonging the lives of individuals, a reverence and respect for life has been lost. It seems impossible to believe that the taking of life has become, in some places, an acceptable alternative." 
Referring directly to euthanasia, Archbishop Martino said that for "many older persons, such changes in legislation or medical practice, or the threat of those changes, have become a new source of fear and anxiety, and can indeed weaken the fundamental relationship of unconditional trust that they have a right to place in those whose mission is to care for them." To live longer should not be regarded as exceptional, or as "a burden or challenge," but rather as "the blessing that it is. Older persons enrich society," he stressed during the Feb. 26 meeting. Therefore, the "United Nations must ensure that the world is prepared to recognize and respect the human dignity of older persons and enable them to be full participants in society, rather than viewing them as a challenge to the community," the permanent observer concluded. The U.N. General Assembly has decided to convene a second World Assembly on Aging sometime in 2002, in part to adopt a revised plan of action and a long-term strategy on aging. Westminster, England -- Claims that the Labour party plans to legalise euthanasia in England and Wales should it win the general election have been greeted with concern but not surprise by Britain's longest-established pro-life group. A Labour party spokesman refused to confirm whether the party's manifesto would include a commitment to extending the Adults With Incapacity (Scotland) Act to other parts of the UK. Lord Irvine, the Lord Chancellor, said in 1999 that legislation to authorise euthanasia by neglect in England and Wales would be introduced as soon as parliamentary time allowed. Alison Davis, head of the handicap division of the Society for the Protection of Unborn Children (SPUC), said: "There is no doubt that the Scottish legislation has opened the door to allowing vulnerable people to become victims of euthanasia by neglect, and it would be extremely regrettable if it were applied to other parts of the UK as well. 
Under the Scottish law, doctors could be expected to kill an incapacitated patient by starvation and dehydration. This could be done both on those who have a terminal illness and on those who are not dying but are incapacitated, at the behest of a proxy who may not be aware of the medical situation. The proxy could even stand to benefit from the patient's death." Source: Archives of Internal Medicine 2001;161:421-430. New York, NY -- Even when close relatives know what an individual's living will expresses, chances are those treatment preferences will not be followed, results of a study suggest. A host of prior studies have demonstrated that family members and physicians fare poorly in following an individual's life-sustaining treatment preferences in the absence of a living will (or ``advance directive''), according to Dr. Peter Ditto from the University of California at Irvine, and associates. What has never been tested, though, is whether preferences expressed in a living will are actually honored. The investigators looked at whether the existence of a living will--with and without thorough discussion of its contents among patients and their relatives--actually improved the accuracy with which an individual's surrogates predicted his or her treatment preferences. In the absence of a living will, relatives correctly predicted patient preferences less than 70% of the time, the authors report. Surprisingly, living wills--even with thorough discussions between patients and relatives--failed to improve the accuracy of the surrogates' predictions, the researchers note. In fact, according to the report in the February 12th Archives of Internal Medicine, there was no subgroup of patients or surrogates and no living will intervention that improved the prediction accuracy over that achieved by surrogates of patients with no advance directives. 
Despite these facts, the investigators observe, both patients and their surrogates believed that the living will and discussions improved the surrogates' understanding of the patients' wishes and increased the surrogates' comfort in making medical decisions for the patients. ``The results of the present study clearly challenge the effectiveness of (living wills) as a means of preserving patients' ability to control specific treatment decisions near the end of life,'' Ditto and colleagues write. ``What is less clear is the extent to which the majority of patients and surrogates desire this level of control and the relative value to assign to the goals of accurate surrogate decision making versus psychological benefits in future policy development,'' the authors conclude. A pro-life alternative to living wills is available called the Will to Live. For more information, contact: National Right to Life, Attn: Will to Live, 419 7th St. NW, Suite 500, Washington, DC 20004. "I'm Your Doctor and I'm Here to Kill You" It's a strange business I'm in. I do talk radio and I write. I know about a lot of things and read everything I can get my hands on. I talk to reporters and scientists and experts and citizens with stories to tell. Most of the time, the subjects we discuss on my programs deal with problems and situations that affect other people. It isn't often that the subject applies to me or my family. That was the case. Not now. A little over two weeks ago, I interviewed a man on my program whom I'd interviewed before. He had written "Forced Exit," a book about euthanasia, what we used to call "mercy killing." Wesley Smith has a new book out now. It's called "The Culture of Death: The Assault on Medical Ethics in America," published by Encounter Books. It's a chilling account of the hidden changes in medical care in this country and more importantly, the deliberate changes in the training of doctors, nurses, ethics personnel and other health-care workers. 
Remember how most of us were concerned about the wonders of medical technology keeping us alive artificially, making us slaves to tubes and machines? Remember how we all were advised to have living wills which would designate what we didn't want done to us if we were in final and desperate straits? Remember all the money we paid to lawyers to draw up such documents and how, when it was done, we felt safe? I won't mince words. What I'm saying is that you and your loved ones are now more in danger of having your life ended by doctors refusing medical care than of having it extended artificially. In his book, Smith describes what is called the "Futile Care Theory." What it means in simple language is that doctors will refuse treatment, any treatment, if they decide that it's your time to die. It won't matter if the patient wants help. It won't matter if the family wants help. The answer will be "no." One week ago, my father was transferred to a larger hospital to have blood drained from his chest. He was conscious, rational, could eat and drink on his own and had minimal pain. His only medication was a blood pressure pill, a baby aspirin, a Tylenol if he had pain, and an IV drip with potassium. Hardly what you would expect of a "terminal" case. He was to be transferred back to his original hospital/convalescent care. That's when it all happened, so fast it made our heads spin. The doctors decided on their own that we wanted only pain assistance, so they discontinued all the medicines he was getting, including the IV drip. They never asked the family; it was an arbitrary decision. My poor mother, who was alone with Daddy, believed them when they said it was the "best" thing for him. They were doctors, after all! Besides, she told me, she was afraid to question them for fear they might do something to hurt Daddy. I implored the head nurse and was told that Daddy was going "through a process"! (A process?) Yes, I was told in all seriousness, my dad was "processing." 
That's the new way of saying that Daddy was dying. Daddy ate and talked, right up to the end. He even ate two desserts with gusto. I talked to him a few hours earlier and he was his old self. Two days before, I'd asked him if he wanted to die and he said no. The doctor expressed his regrets to my Mom and said he was sorry I was so upset. He said "that often happens with family members who just don't understand and get very emotional." I don't know how he sleeps at night. I just can't wait to get his bill. I got more consolation from the vet when my dog died. Be warned. This is not just my tragedy, this same fate awaits your family because that is what the medical system is teaching their people to do -- to us, their patients, under the guise of medicine. God help us. Marget's Attitude: Is Euthanasia "Humane?" On a KLM flight to Amsterdam two weeks ago, I had a conversation with a member of the crew that chilled me to the bone. It illustrates what happens when the church fails to teach the hard truths of our faith. KLM is the Dutch airline. The flight crew was gracious, but one middle-aged woman called Marget was exceptionally friendly. As she cleared away the breakfast dishes, Marget asked what we were planning to do in Amsterdam. I told her I was speaking at the Billy Graham Conference on Evangelism. I also mentioned that I work in the prisons. In response, Marget told me she was a practicing Catholic and that she sang in a choir that performed in prisons. Since I was talking to a Christian, I thought I'd find out what Marget thought about euthanasia, which, of course, is legal in Holland. I assumed she would find it abhorrent, but to my astonishment, she gave an impassioned defense of it. She said she had seen her grandmother waste away in agony. The family wanted to help her die, but before they could arrange it, she died naturally. I explained to Marget that suffering could be managed without taking life. 
She replied that she had seen everything tried with her grandmother. I asked if other Dutch Christians shared her views. Yes, she replied--everybody thinks euthanasia is wonderfully humane because it enables us to help eliminate suffering. I challenged her with every argument I could think of. I told her that God puts our souls in our bodies when life begins and that humans cannot make the decision to take it. Marget, always smiling warmly, stood her ground. She insisted that euthanasia is a kind thing -- that it's consistent with the views of good people. Well, I didn't change her; needless to say she did not change me. This woman was sincere about her faith and she really believed she was doing the right, kind, loving, and gentle thing -- yes, in her eyes, a Christian thing. She brought to mind C. S. Lewis's description of how the greatest evil is done not in sordid dens of crime, or even in concentration camps. "In those we see its final result," Lewis notes. "But it is conceived and ordered . . . in clean . . . warmed, and well-lighted offices, by quiet men with white collars . . . who do not need to raise their voices." I confess, I got off that plane shaken. I realized that so often in a culture war, we're not up against evil people who enjoy killing. Instead, we're up against good, decent people who genuinely think it's humane and right to kill. Marget's attitude signals a profound failure of the church. Everywhere we look, our culture is promoting euthanasia, abortion, and infanticide as loving, humane solutions. We even hear abortion of poor children talked about the same way. The challenge of the church is to confront this dangerous philosophy head on. Voluntary euthanasia leads directly to involuntary euthanasia, as is happening in Marget's Holland. You and I must teach the good people around us that euthanasia doesn't raise the curtain on a more "humane" society. Instead, it's the final curtain call on a culture of death. "The news media . . . 
often promote death as an answer to the serious problems of grave illness and disability . . . gullibly publishing false assertions of euthanasia advocates without checking the facts. "A classic example was the episode on 'mercy killing' that aired on '60 Minutes,' a program that led, ironically, to Jack Kevorkian's undoing. Kevorkian videotaped himself as he murdered Thomas Youk, a man with Lou Gehrig's disease [amyotrophic lateral sclerosis or ALS]. He then took the tape to '60 Minutes' correspondent Mike Wallace, a vocal pro-euthanasia advocate. "In the '60 Minutes' presentation, Kevorkian . . . tells the newsman that he killed Youk, with permission, to keep him from choking to death on his own saliva. Wallace accepted the excuse without blinking an eye . . . . "ALS is indeed a devastating disease. Yet proper medical care prevents people with ALS from choking or suffocating. . . . Accurate information was just a phone call away. Yet Wallace, who became famous for his hard-hitting, acerbic interviews, apparently didn't bother to verify Kevorkian's assertions before airing the program." Diane Arnder, mother of 29-year-old Tina Cartrette, has asked the North Carolina courts to give her the right to kill her daughter by removing a feeding tube that has provided the majority of her nutrition for several years. Cartrette has life-long physical and cognitive disabilities - disabilities with which many are unfamiliar, since medical professionals have so long recommended institutionalization as the treatment of choice, keeping severely disabled people out of sight and out of mind. For those more familiar with disability issues, the media reports of Tina Cartrette's situation leave many unanswered questions. Accepting that Diane Arnder loved her daughter the way most parents who institutionalize their children do, what kind of love spans the distance between them now, after 25 years living apart? 
Did Arnder ever become aware of Geraldo Rivera's groundbreaking expose on substandard care, even atrocities, committed against residents of institutions? Did she hear about the many states that have closed all their institutions and moved residents into community settings with in-home support services? Though many parents fight the system to enforce their child's rights, perhaps Arnder was kept uninformed. Her words suggest that she accepted the stereotypes about her daughter, and the antiquated institutional system, without question. During the 1980s, a right to refuse unwanted extraordinary or "heroic" life-sustaining medical treatment was legally defined, a right initially to be applied only to conscious people deemed "mentally competent." The dangers of allowing other decision-makers - insurance companies, physicians, family members, state guardians - to engage in passive euthanasia seemed obvious at first. Like most states, North Carolina has decided that food and water by tube constitutes "medical treatment" that can be refused by guardians "on behalf of" an incompetent individual. This has been allowed even though many people in nursing homes and institutions are on tube feeding because there aren't enough staff to feed them, rather than for medical reasons. But the law limits this narrow right to kill by starvation to (a) people who used to be deemed competent and who legally documented or clearly expressed their wish to reject tube-feeding, or (b) people who were never deemed competent who are terminal or permanently unconscious. It doesn't take a PhD in psychology to recognize just whose misery some family members would like to put their older or disabled relative out of. Are the North Carolina courts being asked, in effect, to decide that some older and disabled individuals are not "persons" entitled to equal protection of the law? 
It's bad enough that disabled individuals and families are not getting the in-home support services they need, while the government pays more, on average, to keep individuals in nursing homes and other institutions, often against their will. It's bad enough that insurance coverage is frequently denied for necessary care, and that doctors don't know or don't disclose important information to patients and families, including the physician's financial conflicts of interest in managed care. Washington, DC -- As Oregon reported that 27 terminally ill people used the state's assisted suicide law to end their lives last year, one of the state's senators urged the Bush administration not to do anything that would thwart the unique statute. ``There is no evidence of a crisis that would compel the federal government to pursue extraordinary means to overturn Oregon's law,'' pro-assisted suicide Senator Ron Wyden (D-OR) wrote to pro-life Attorney General John Ashcroft, amid indications that pro-life lawmakers may try again to undo the law. ``There has been no substantiated claim of abuse of Oregon's law, nor has there been a rush to use the Oregon law,'' Wyden wrote. Oregon is the only state that allows terminally ill patients to die with a doctor's help. The state Health Division announced that 27 people used the law in 2000, the same number as the previous year. At least 70 people have ended their lives through assisted suicide since the so-called Death With Dignity Act took effect in October 1997, according to a report published in Thursday's New England Journal of Medicine. Opponents were dealt a blow in 1998 when then-Attorney General Janet Reno ruled federal drug agents cannot move against doctors who help terminally ill patients die under Oregon's law. To try to circumvent Reno's order, pro-life Sen. 
Don Nickles promoted a bill last session, the Pain Relief Promotion Act, that would have revoked the prescribing licenses of doctors who deliberately use federally controlled substances to aid a patient's death. The bill, stridently opposed by Wyden, never reached the floor for a vote. During last year's campaign, Bush said he would have supported the legislation. Now, assisted suicide opponents hope he will issue an executive order to make it more difficult for doctors to use the Oregon law. Pro-life Sen. Gordon Smith (R-OR), who supported the Nickles bill, called Bush and followed up with a letter in January asking that any executive order ensure that doctors still be allowed to treat excessive pain and avoid retroactive punishment for patients they had previously helped to die. Nickles, the No. 2 Senate Republican, has not reintroduced the bill. His spokeswoman, Gayle Osterberg, wouldn't comment on discussions between the White House and the senator or his staff. Ashcroft, then a Missouri senator, was not among the Nickles bill's 41 co-sponsors. He has spoken, however, against using government money, such as Medicare or Medicaid, to support the state law. ``We should not hook up Dr. Kevorkian to the United States Treasury,'' he said in 1997, referring to the Michigan man who had performed dozens of assisted suicides. Every terminally ill patient who has died under the law took a federally controlled substance, such as a barbiturate. To request a prescription, patients must be 18 years or older, Oregon residents, capable of communicating health care decisions and diagnosed with an illness that will lead to death within six months. Today's report from the Oregon Health Division on legally permitted physician-assisted suicides in 2000 provides no adequate information on abuses of the state's guidelines, and is not designed to do so. 
The 27 assisted suicides reported for this third year of Oregon's 'experiment' in lethal medicine are simply those cases which the physician-perpetrators themselves chose to report. The total number of actual cases, not to mention the number of times various 'safeguards' were distorted or simply ignored, remains concealed in the name of physician-patient confidentiality. A startling 63% of these patients (compared to 26% in 1999) cited fear of being a 'burden on family, friends or caregivers' as a reason for their suicide. Some patients and families are learning all too well the deeper message of Oregon's law: terminally ill patients have received this special 'right' to state-approved suicide not because they are special in any positive way, but because they are seen as special burdens upon the rest of us. 30% cited concern about 'inadequate pain control' as a reason for their death (compared to 26% the year before), despite claims by the Oregon law's defenders that legalizing assisted suicide would improve pain control and eliminate such concerns. Also rising is the percentage of victims who were married (67%, up from 44%) and who were female (56%, up from 41%). It seems some older married women in Oregon are receiving the message that they are a 'burden' on their husbands, and then acquiescing in assisted suicide. Despite a medical consensus that the vast majority of suicidal wishes among the sick and elderly are due to treatable depression, in only 19% of these cases (compared to 37% the previous year) did the doctor bother to refer the patient for a psychological evaluation. The median time between a patient's initial request for assisted suicide and his or her death by overdose also decreased markedly, from 83 days to 30 days. Oregon's experiment is taking on more of the features of an assembly line. 
These signs of the 'slippery slope' in action, illustrating trends predicted by critics of the Oregon law, underscore the need to end this state's experiment before it claims more lives. Boston, MA -- An analysis of 69 assisted suicides supervised by Jack Kevorkian has concluded that 75 percent of his patients were not terminally ill when he helped them to die, and that autopsies could not confirm any physical disease in five of the cases. The study's findings were reported in a letter to the New England Journal of Medicine and were made available on Wednesday. The journal, which will be published on Thursday, said a team led by Lori A. Roscoe of the University of South Florida at Tampa looked at the characteristics of people who died with Kevorkian's assistance in Oakland County, Michigan between 1990 and 1998. Kevorkian, who helped more than 100 people commit suicide, is serving a prison sentence of 10 to 25 years in Michigan. He was convicted of second degree murder in April 1999 in a trial that followed an appearance on national television in which he administered a lethal injection to Thomas Youk, a 52-year-old man suffering from Lou Gehrig's disease, and dared the criminal justice system to stop him. The study's findings seemed to suggest divorcees or people who had never married were more likely to turn to assisted suicide in the absence of safeguards. Roscoe and her colleagues said "persons who were divorced or had never married were over-represented among those who died with Kevorkian's help, suggesting the need for a better understanding of the familial and psychosocial context of decision making at the end of life." They said only 17 of the 69 patients were found after autopsy to be terminally ill and likely to live less than six months. 
The wish of the other 52 people to get help from Kevorkian might be explained by the fact that "72 percent of the patients had had a recent decline in health status that may have precipitated the desire to die," the researchers said. Of the 69 patients, 71 percent were women, which "is noteworthy because suicide rates are usually lower among women than among men," they concluded. The Roscoe team only looked at the Michigan cases because the procedures of medical examiners in other states may have varied. Kevorkian's attorney, Mayer Morganroth, dismissed the study. "All they're doing is repeating allegations made by the pro-life people," he told Reuters. "They're not really of any real substance, and they're not really accurate or true." He also attacked the authors, pointing out that Roscoe and another person involved were not medical doctors, and that a third person involved, Oakland County medical examiner L.J. Dragovic, had testified numerous times against Kevorkian and the two men were "bitter enemies."
Based on record reviews and interviews, the hospital failed to ensure the direct care staff were trained and evaluated for competency in the use of nonphysical intervention skills as evidenced by having no documented evidence of the qualifications of the individuals training the direct care staff in crisis prevention interventions (CPI) for 11 (S2, S5, S8, S9, S12, S13, S14, S15, S26, S27, S28) of 12 (S2, S5, S8, S9, S12, S13, S14, S15, S18, S26, S27, S28) direct care staff personnel files reviewed for competency from a total of 26 employed direct care personnel. Review of the personnel files of S5RN, S8MHT (Mental Health Tech), S9RN, S13LPN (Licensed Practical Nurse), S14RN, S15MHT, and S26MHT revealed their training and evaluation of competency for performing CPI was conducted by S2DON (Director of Nursing). Review of S2DON's personnel file revealed her training and evaluation of competency for performing CPI was conducted by S25APRN (Advanced Practice Registered Nurse) on 01/05/15. Review of S12LPN's personnel file revealed her training and evaluation of competency for performing CPI was conducted by S25APRN on 01/06/14. Review of S25APRN's credentialing file revealed her certification in "The Crisis Prevention Institute, Inc. and the International Association of Nonviolent Crisis Intervention Certified Instructors" program expired on [DATE]. Further review revealed she was trained in CPI by S2DON on 08/10/14. In an interview on 05/20/15 at 4:30 p.m., S7HR Dir (Human Resource Director) confirmed that S2DON had been conducting CPI training and competency evaluations for the hospital's nursing personnel. In an interview on 05/21/15 at 4:15 p.m., S2DON confirmed she had been trained in CPI by S25APRN, and she had trained S25APRN on 08/10/14. S2DON confirmed that she had conducted the CPI training for S5RN, S8MHT, S9RN, S13LPN, S14RN, S15MHT, and S26MHT. 
During the interview S2DON was asked to present evidence of S25APRN's qualifications as an instructor of CPI when she trained S2DON on 01/05/15. No documented evidence of S2DON's and S25APRN's qualifications as instructors of CPI was presented as of the completion of the survey on 05/21/15 at 6:20 p.m. 1) Failing to measure, analyze, and track quality indicators and other aspects of performance that assess processes of care, hospital services, and operations as evidenced by failure to have documented evidence of collection of data, analysis, and tracking of quality indicators for the first quarter and the beginning of the second quarter of 2015 (see findings in tag A0273). 2) Failing to ensure quality assessment and performance improvement (QAPI) data collected was used to identify opportunities for improvement and changes that will lead to improvement. There were 7 opportunities for improvement identified during the survey that had not been identified, trended, tracked, and analyzed with corrective action implemented by the hospital (see findings in tag A0283). 3) Failing to ensure it tracked adverse patient events, analyzed their causes, and implemented preventive actions and mechanisms that include feedback and learning throughout the hospital as evidenced by having no documented evidence that 7 patient falls that occurred in April 2015 were tracked and analyzed and that preventive actions and mechanisms were implemented (see findings in tag A0286). 4) Failing to develop and implement a policy for conducting performance improvement projects as evidenced by having no documented evidence that the hospital's "Performance Improvement Plan" addressed performance improvement projects and not having conducted any performance improvement project (see findings in tag A0297).
5) The governing body failing to ensure an ongoing program for quality improvement and patient safety is implemented and maintained and determines the number of distinct improvement projects that will be conducted annually (see findings in tag A0309). 6) The governing body failing to provide adequate resources for measuring, assessing, improving, and sustaining the hospital's performance and reducing risk to patients as evidenced by having no person designated the responsibility for the quality assessment and performance improvement (QAPI) program after S12LPN (Licensed Practical Nurse) left employment on 02/13/15 (see findings in tag A0315). Based on record reviews and interviews, the hospital failed to measure, analyze, and track quality indicators and other aspects of performance that assess processes of care, hospital services, and operations as evidenced by failure to have documented evidence of collection of data, analysis, and tracking of quality indicators for the first quarter and the beginning of the second quarter of 2015. Review of the hospital's "Performance Improvement Plan", presented by S12LPN (Licensed Practical Nurse) as the current plan, revealed the program incorporates the functions of quality monitoring, evaluation, and improvement; utilization management; infection surveillance/prevention/control; safety and risk management of environment of care; information management; staff development; clinical competence; and grievance/ethical issues resolution. Further review revealed final responsibility for performance improvement in the provision of quality services rests with the Chief Executive Officer (CEO). The CEO shall receive quarterly reports of ongoing monitoring and improvement activities, identified trends, and/or potential risk exposure concerns, accompanied by narrative interpretations, and a progress report or recommendations on problem solving and performance improvement. 
The Director of Performance Improvement coordinates, assures the integration of, and monitors all activities of the program and provides regular and quarterly summary reports to the senior Administration person, senior management personnel, Medical and Professional Staff, and the governing board. The Performance Improvement Committee meets at least monthly to review and analyze data for any patterns or trends or opportunities to improve performance, make recommendations for corrective actions, and to monitor effectiveness of corrective actions taken. Review of "Performance Improvement Committee Meeting" minutes presented by S12LPN revealed a meeting was held on 01/14/15 to discuss monitoring for the month of December 2014. No documented evidence of monthly meetings for January, February, March, and April 2015 was presented as of the completion of the survey on 05/21/15 at 6:20 p.m. No documented evidence of data collection, analysis, and trending of the hospital's quality indicators was presented for January, February, March, and April 2015. In an interview on 05/20/15 at 11:20 a.m., S6LPN indicated she had recently been assigned responsibility for QAPI about 3 weeks ago by S1Admin (Administrator). She further indicated she told him she was willing to assist and take it on, but the whole program needed to be revamped first. She further indicated she had not signed a job description for Director of Performance Improvement. S6LPN indicated the performance improvement program was "constricted, basically useless", had monitors that didn't capture what needed to be captured, and was very brief with no detail. S6LPN indicated the QAPI program was under the previous infection control nurse who was no longer here. She further indicated that since she was hired in October or November 2014, she had sat in on one QAPI meeting, and nothing related to performance improvement was discussed. She indicated the meeting was more like a social gathering.
S6LPN indicated she had no QAPI data or meeting minutes to present from the previous infection control nurse. In an interview on 05/20/15 at 12:10 p.m., S12LPN joined a meeting in progress with S6LPN. S12LPN indicated she was responsible for QAPI before she left employment. She further indicated she had just returned to work PRN (as needed). She presented a spreadsheet dated 01/14/14 and said it should have been 01/14/15, since the meeting was held on 01/14/15. S12LPN indicated the data reviewed at this meeting was from the December 2014 monitoring. She further indicated that when she left, all data was up-to-date, and she doesn't know "who messed with it." S12LPN indicated she found her agenda for February 2015 but doesn't know where the minutes are. In an interview on 05/20/15 at 3:00 p.m., S12LPN indicated her last day of work was 02/13/15. She further indicated S6LPN "will have to find that stuff (referring to QAPI data) because I wasn't around." She confirmed she didn't have any further QAPI data to present other than the meeting minutes for January 2015. In an interview on 05/21/15 at 8:05 a.m., S1Admin indicated S6LPN was in charge of QAPI. Based on record reviews and interviews, the hospital failed to ensure quality assessment and performance improvement (QAPI) data collected was used to identify opportunities for improvement and changes that will lead to improvement. There were 7 opportunities for improvement identified during the survey that had not been identified, trended, tracked, and analyzed with corrective action implemented by the hospital. Further review revealed final responsibility for performance improvement in the provision of quality services rests with the Chief Executive Officer (CEO).
The Director of Performance Improvement coordinates, assures the integration of, and monitors all activities of the program and provides regular and quarterly summary reports to the senior Administration person, senior management personnel, Medical and Professional Staff, and the governing board. Review of the "PI (performance improvement) Reporting" for 2014 and January 2015 (data collection for December 2014) revealed hand hygiene was 88% (percent) in November 2014 and 85% in January 2015 with no documented evidence that a corrective action plan was developed and implemented to address this identified opportunity for improvement. Further review revealed biohazard waste handled through a contract with Nursing Home A was 75% in January 2015 with no documented evidence that corrective action or continued monitoring would occur. Further review revealed the contract pharmacy was at 83% in January 2015 with no documented evidence that corrective action or continued monitoring would occur. Review of variance reports, presented by S2DON (Director of Nursing), revealed there were 7 patient falls from 04/06/15 through 04/27/15 with no documented evidence of tracking, trending, and corrective action initiated to address this opportunity for improvement. 7) Failing to implement a system for identifying, investigating, and controlling infections and communicable diseases of patients and personnel that resulted in an Immediate Jeopardy situation that was identified on 05/14/15 at 5:20 p.m. Based on record reviews and interviews, the hospital failed to ensure it tracked adverse patient events, analyzed their causes, and implemented preventive actions and mechanisms that include feedback and learning throughout the hospital as evidenced by having no documented evidence that 7 patient falls that occurred in April 2015 were tracked and analyzed and that preventive actions and mechanisms were implemented.
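The compliance figures above lend themselves to simple threshold screening. The following is an illustrative sketch only, not the hospital's actual QAPI tooling; the 90% target and the indicator labels are assumptions introduced for the example:

```python
# Illustrative QAPI screening sketch. The 90% target and indicator labels are
# assumptions for this example, not the hospital's documented thresholds.
THRESHOLD = 90.0  # assumed compliance target, in percent

# Compliance rates reported in the "PI Reporting" reviewed by the survey.
monthly_indicators = {
    "hand hygiene (Nov 2014)": 88.0,
    "hand hygiene (Jan 2015)": 85.0,
    "biohazard waste contract (Jan 2015)": 75.0,
    "contract pharmacy (Jan 2015)": 83.0,
}

def needs_corrective_action(indicators, threshold=THRESHOLD):
    """Return the indicators whose compliance rate falls below the threshold."""
    return {name: rate for name, rate in indicators.items() if rate < threshold}

for name, rate in sorted(needs_corrective_action(monthly_indicators).items()):
    print(f"{name}: {rate:.0f}% - flag for corrective action plan")
```

Under the assumed 90% target, every figure cited in the report would have been flagged for a corrective action plan, which is the gap the deficiency describes.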
In an interview on 05/21/15 at 4:15 p.m., S2DON confirmed that no analysis or corrective action had been implemented to address the 7 patient falls that occurred in April 2015. Based on record reviews and interview, the hospital failed to develop and implement a policy for conducting performance improvement projects as evidenced by having no documented evidence that the hospital's "Performance Improvement Plan" addressed performance improvement projects and not having conducted any performance improvement project. Review of the hospital's "Performance Improvement Plan", presented by S12LPN (Licensed Practical Nurse) as the current plan, revealed no documented evidence that performance improvement projects were addressed in the hospital. In an interview on 05/21/15 at 12:50 p.m., S12LPN (Licensed Practical Nurse) confirmed the hospital's "Performance Improvement Plan" did not address performance improvement projects, and the hospital had never conducted a performance improvement project. Based on record reviews and interview, the hospital failed to ensure that all medical records had a medical history and physical examination (H&P) completed and documented no more than 30 days before or 24 hours after admission as evidenced by having 1 (#6) of 6 (#1 - #6) patient records reviewed for a completed and documented H&P within 24 hours of admission without an H&P. Review of the hospital's "Rules and Regulations For The Professional Medical Staff", presented as the current rules and regulations by S1Admin (Administrator), revealed that a complete admission history and physical examination shall be recorded within 24 hours of the patient's admission. Review of Patient #6's medical record revealed she was a [AGE] year old female admitted on [DATE] at 8:00 p.m. with diagnoses of Schizoaffective Disorder, Bipolar Type, currently depressed, Hypertension, and Hypercholesterolemia.
Review of Patient #6's entire medical record on 05/20/15 at 3:30 p.m., 43 1/2 hours after admission, revealed no documented evidence that an H&P had been completed and documented. In an interview on 05/20/15 at 3:30 p.m., S9RN (Registered Nurse) confirmed that an H&P had not been conducted by a physician for Patient #6. Based on record reviews, observation, and interviews, the hospital failed to ensure the pharmacy or drug storage area was administered in accordance with hospital policies and procedures and its contract with Pharmacy A as evidenced by failing to destroy medications in accordance with hospital policy and failing to develop a system for destruction of controlled substances when informed by S19RPh (Registered Pharmacist) with Pharmacy A and S20Contract RPh that they could no longer destroy controlled substances. Review of the hospital policy titled "Pharmacy Services", presented as a current policy by S2DON (Director of Nursing), revealed that the Director of Pharmacy is responsible for the removal of all recalled, expired, or damaged medications. Review of the "Pharmacy Services Agreement" entered into on 08/16/13 with Pharmacy A, presented by S1Admin (Administrator), revealed that responsibilities of the pharmacy included the rendering of pharmacy services in accordance with any applicable requirements of Louisiana Board of Pharmacy guidelines; local, state, and federal laws and regulations; community standards of practice; and the pharmacy's and the hospital's policies and procedures manuals and related policies and procedures. Observation on 05/14/15 at 2:10 p.m. revealed a locked safe in a locked drawer of S2DON's desk that contained multiple controlled substances that were either expired or remained after a patient was discharged.
In an interview on 05/14/15 at 1:35 p.m., when asked how controlled medications are handled after discharge, S2DON indicated they are logged for donation purposes and locked in the medication room. When asked if she ever kept controlled substances in her office, S2DON indicated narcotics that are expired are kept double-locked in her desk drawer in her office until they can be destroyed. She further indicated S20Contract RPh said he couldn't destroy controlled substances anymore, and the hospital had not "figured a plan to destroy them." S2DON indicated she didn't feel comfortable destroying controlled substances without having a pharmacist present, so she's holding the ones she has in her desk drawer until the pharmacist comes to the hospital. In a telephone interview on 05/21/15 at 8:40 a.m., S20Contract RPh indicated he was contracted by the hospital to conduct the monthly pharmacy inspections and perform chart audits. When asked whether he destroys controlled substances, S20Contract RPh indicated he originally was given permission by the Louisiana Board of Pharmacy to destroy narcotics, but with changes to the new law in December 2014, narcotics have to be given to a collector or a third party to destroy. He further indicated he doesn't destroy any medications. When asked if it was acceptable standards of practice for the DON to store controlled substances in her desk drawer, he indicated it was all right, as long as the DON has the only key and keeps a log of what's contained in the lock box. He further indicated if medications are sent to the hospital by Pharmacy A with a patient-specific label, the medication can be given to the patient at discharge, but most hospitals don't do that. In a telephone interview on 05/21/15 at 8:55 a.m., S19RPh with Pharmacy A confirmed he is the designated pharmacist responsible for Kailo Behavioral Hospital. When asked if he destroys medications and controlled substances, he indicated "no", it's done at the hospital.
He further indicated he didn't know how medications were being destroyed at the hospital. S19RPh with Pharmacy A indicated if there's an expired medication in the med-dispense system supplied by Pharmacy A, he can take it back, but he doesn't know how patient-specific medications sent to the hospital are destroyed. He further indicated he's responsible for getting ordered medications to the hospital and for getting medications into the med-dispense system. S19RPh with Pharmacy A indicated once a medication is in the patient's name, "it's out of our control." When asked if it's acceptable standard of practice to have controlled substances locked in the DON's desk drawer, he indicated that's the way a lot of facilities handle it. He further indicated if the physician doesn't want the patient to take his/her patient-specific medications home when they are discharged, the patient can destroy the medications while witnessed by the nurse. Failing to ensure the chief executive officer (CEO) appointed by the governing body was responsible for managing the hospital as evidenced by failing to ensure the hospital was in compliance with the Conditions of Participation of QAPI (quality assessment and performance improvement), Nursing Services, Medical Record Services, and Infection Control (see findings in tag A0057). Based on record reviews and interviews, the hospital failed to ensure the chief executive officer (CEO) appointed by the governing body was responsible for managing the hospital as evidenced by failing to ensure the hospital was in compliance with the Conditions of Participation of QAPI (quality assessment and performance improvement), Nursing Services, Medical Record Services, and Infection Control. 3) being responsible for assuring that the hospital is in conformity with the requirements of planning, regulatory, and inspecting agencies.
6) The governing body failing to provide adequate resources for measuring, assessing, improving, and sustaining the hospital's performance and reducing risk to patients as evidenced by having no person designated the responsibility for the quality assessment and performance improvement (QAPI) program after S12LPN (Licensed Practical Nurse) left employment on 02/13/15. c) Failing to assess a patient's blood pressure prior to administration of antihypertensive, antipsychotic, anti-anxiety, and antidepressant medications as required by hospital policy for 2 (#3, #6) of 3 (#3, #5, #6) current inpatient records reviewed for nursing assessments from a sample of 6 patients. 2) Failing to develop a system for coding and indexing medical records that allowed timely retrieval by diagnosis. 1) Failing to implement a system for identifying, investigating, and controlling infections and communicable diseases of patients and personnel that resulted in an Immediate Jeopardy (I.J.) situation that was identified on 05/14/15 at 5:20 p.m. The I.J. remained in place as of the conclusion of the survey on 05/21/15 at 6:20 p.m. i) Failure to perform active surveillance of handwashing and use of PPE as evidenced by having no documented evidence of handwashing surveillance from 01/01/15 through 05/21/15. 6) Failing to ensure the chief executive officer, the medical staff, and the director of nursing assured the hospital-wide QAPI program addressed problems identified by the infection control officer and was responsible for the implementation of successful corrective action plans in affected problem areas as evidenced by failure to have documented evidence of the collection of, tracking, and analysis of infection control data with corrective action plans for identified problems from 01/01/15 through the time the survey was completed on 05/21/15 at 6:20 p.m. 
In an interview on 05/18/15 at 11:05 a.m., S1Administrator confirmed he was currently the Administrator of the hospital and responsible for management of the hospital. Based on record reviews and interview, the hospital failed to ensure a patient who filed a grievance was provided written notice of the hospital's decision that contained the name of the hospital contact person, the steps taken on behalf of the patient to investigate the grievance, the results of the grievance process, and the date of completion as evidenced by failure to have documented evidence that a resolution letter was sent for 1 (R1) of 1 patient grievance reviewed. Review of the "Grievance/Complaint Log", presented by S2DON (Director of Nursing), revealed there were 13 grievances logged for 2014 and 1 logged for 2015. Review of the investigation of the grievance reported by Patient R1, presented by S12LPN (Licensed Practical Nurse), revealed that Patient R1 complained of "a tall skinny man grabbed his hair and slapped him." Further review revealed S12LPN interviewed Patient R1 twice and interviewed 2 mental health techs and the charge nurse. Review of the documentation presented by S12LPN revealed no documented evidence that a resolution letter was prepared and sent to Patient R1 at the conclusion of the investigation as required by the Patient Rights certification standards. In an interview on 05/21/15 at 8:50 a.m., S12LPN confirmed that a resolution letter had not been sent to Patient R1 upon completion of the investigation of his grievance. Based on record reviews and interview, the hospital failed to ensure the use of restraint was in accordance with the order of a physician as evidenced by failing to obtain and document a physician's order for restraints for 1 (#1) of 2 (#1, #4) discharged patient records reviewed of patients who had restraints applied from a sample of 6 patients. 
Review of the hospital policy titled "Seclusion & (and) Restraint", presented as a current policy by S2DON, revealed that the use of a restraint or seclusion must be in accordance with the order of a physician or designee (a licensed independent practitioner, trained registered nurse, or physician assistant) permitted by the state and hospital to order seclusion or restraint. Review of the "Seclusion and Restraint Log" presented by S2DON revealed Patient #1 had a physical hold and chemical restraint implemented on 01/26/15, 01/27/15, and 01/30/15. Review of Patient #1's medical record revealed he was a [AGE] year old male admitted on [DATE] and discharged on [DATE]. Further review revealed his diagnoses included [DIAGNOSES REDACTED][DIAGNOSES REDACTED]. "On 01/26 the nursing staff reports Patient #1 required a physical restraint for twenty minutes, a chemical restraint as other interventions had failed including verbal de-escalation and redirection." "On 01/27 the nursing staff reports Patient #1 required a physical restraint for fifteen minutes with a PRN (as needed) IM (intramuscular) injection." "On 01/30 the nursing staff reports Patient #1 required a physical restraint for fifteen minutes, a chemical restraint as other interventions had failed including verbal de-escalation and redirection." Review of Patient #1's physician orders revealed no documented evidence of physician orders for physical restraints or physical holds on 01/26/15, 01/27/15, and 01/30/15. In an interview on 05/21/15 at 4:15 p.m., S2DON indicated she couldn't explain why a physician's order was not documented when Patient #1 had restraints. She confirmed that the nurse is supposed to obtain and document a physician's order. Based on record reviews and interview, the governing body failed to ensure an ongoing program for quality improvement and patient safety is implemented and maintained and determines the number of distinct improvement projects that will be conducted annually.
Review of Governing Board meeting minutes for 08/06/14, 12/12/14, 02/11/15, 03/02/15, and 03/27/15 revealed no documented evidence that QAPI (quality assessment and performance improvement) reports were presented and discussed. In an interview on 05/21/15 at 4:15 p.m., S2DON (Director of Nursing) confirmed the Governing Board meeting minutes had no documented evidence that QAPI was discussed or presented (S2DON was interviewed due to the absence of S1Administrator at the time of the interview). Based on record reviews and interviews, the governing body failed to provide adequate resources for measuring, assessing, improving, and sustaining the hospital's performance and reducing risk to patients as evidenced by having no person designated the responsibility for the quality assessment and performance improvement (QAPI) program after S12LPN (Licensed Practical Nurse) left employment on 02/13/15. In an interview on 05/20/15 at 11:20 a.m., S6LPN indicated she had recently been assigned responsibility for QAPI about 3 weeks ago by S1Admin (Administrator). She further indicated she told him she was willing to assist and take it on, but the whole program needed to be revamped first. She further indicated she had not signed a job description for Director of Performance Improvement. S6LPN indicated the QAPI program was under the previous infection control nurse who was no longer here. S6LPN indicated she had no QAPI data or meeting minutes to present from the previous infection control nurse. In an interview on 05/21/15 at 8:05 a.m., S1Admin indicated S6LPN was in charge of QAPI. He had no explanation when informed that S6LPN indicated during her interview that she had not accepted responsibility as Director of Performance Improvement.
1) Failing to ensure a registered nurse (RN) assigned the nursing care of each patient to other nursing personnel in accordance with the patient's needs and the specialized qualifications and competence of the nursing staff as evidenced by having no documented evidence of the qualifications of the individuals training the nursing staff in crisis prevention interventions (CPI) for 8 (S2, S5, S8, S9, S13, S14, S15, S26) of 9 (S2, S5, S8, S9, S13, S14, S15, S18, S26) nursing staff personnel files reviewed for competency from a total of 23 employed nursing personnel (see findings in tag A0397). a) Failing to implement physician orders for Contact Precautions for 1 (#3) of 1 current inpatient with physician orders for Contact Precautions from a total of 3 (#3, #5, #6) current inpatients and a sample of 6 patients. b) Failing to ensure an accurate skin assessment was performed on admission as evidenced by the RN documenting Patient #3's integumentary system as "normal" on 05/12/15 at 2:10 p.m. and S10MD (Medical Doctor) documenting on 05/12/15 at 6:07 p.m. a diagnosis of Scabies for 1 (#3) of 6 (#1 - #6) patient records reviewed for skin assessments from a sample of 6 patients. 1) Failing to implement physician orders for Contact Precautions for 1 (#3) of 1 current inpatient with physician orders for Contact Precautions from a total of 3 (#3, #5, #6) current inpatients and a total sample of 6 patients. 2) Failing to ensure an accurate skin assessment was performed on admission as evidenced by the RN documenting Patient #3's integumentary system as "normal" on 05/12/15 at 2:10 p.m. and S10MD (Medical Doctor) documenting on 05/12/15 at 6:07 p.m. a diagnosis of [DIAGNOSES REDACTED] for 1 (#3) of 6 (#1 - #6) patient records reviewed for skin assessments from a sample of 6 patients.
3) Failing to assess a patient's blood pressure prior to administration of antihypertensive, antipsychotic, anti-anxiety, and antidepressant medications as required by hospital policy for 2 (#3, #6) of 3 (#3, #5, #6) current inpatient records reviewed for nursing assessments from a sample of 6 patients. 4) Failing to ensure a patient's CBG (capillary blood glucose) reading was documented on the MAR for 1 (#6) of 2 (#5, #6) current inpatients with sliding scale insulin orders from a total of 3 current inpatients (#3, #5, #6) and a total sample of 6 patients. 5) Failing to obtain and document a physician's order for restraints for 1 (#1) of 2 (#1, #4) discharged patient records reviewed of patients who had restraints applied from a sample of 6 patients. 6) Failing to ensure ordered labs were drawn timely with results documented on the chart (#1, #2) and labs were drawn only upon receipt of a physician's order (#1) for 2 (#1, #2) of 6 (#1 - #6) patient records reviewed for labs from a sample of 6 patients. 7) Failing to ensure the RN assessed a patient's wound and performed wound care in accordance with physician orders and hospital policy for 1 (#2) of 1 patient record reviewed with a wound from a sample of 6 patients. Review of the hospital policy titled "Management of Outbreaks (Lice/Scabies)", originated August 2013 and presented as a current policy by S2DON (Director of Nursing), revealed that the patient suspected of having Scabies would be immediately placed in Contact Isolation. The patient's room door will be kept closed. Transmission-based protocols (contact precautions) will be followed until completion of treatment and 8 hours thereafter. All contaminated towels/linen are to be handled with care with the employee using appropriate PPE such as gloves and gowns. 1) The patient will be assigned a private room. 2) A sign will be posted outside the door indicating Contact Precautions.
3) In addition to Standard Precautions, all employees are required to wear gloves prior to entering the patient's room and prior to having direct contact with the patient. A gown is worn if soiling of clothes is likely while providing care. Contaminated linen will be handled using appropriate PPE based on the level of saturation, then bagged and routed for industrial cleaning. 4) Door does not have to remain closed. 5) In the case of Scabies only disposable single patient vital sign equipment will be used. 1) Remember that persons with crusted scabies are infested with very large numbers of mites; this increases the risk of transmission both from brief skin-to-skin contact and from contact with items such as bedding, clothing, furniture, rugs, carpeting, floors, and other fomites that can become contaminated with skin scales and crusts shed by a person with crusted scabies. 2) Use contact precautions with protective garments (e.g. gowns, disposable gloves, shoe covers, etc.) when providing care to any patient with crusted scabies until successfully treated; wash hands thoroughly after providing care to any patient. 3) Isolate patients with crusted scabies from other patients who do not have crusted scabies. 4) Maintain contact precautions until skin scrapings from a patient with crusted scabies are negative; persons with crusted scabies generally must be treated at least twice, a week apart; oral Ivermectin may be necessary for successful treatment. 5) Identify and treat all patients, staff, and visitors who may have been exposed to a patient with crusted scabies or to clothing, bedding, furniture or other items (fomites) used by such a patient; strongly consider treatment even in equivocal circumstances because controlling an outbreak involving crusted scabies can be very difficult and risk associated with treatment is relatively low. 6) Treat patients, staff, and household members at the same time to prevent reexposure and continued transmission. 
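Purely as an illustration of the precaution steps enumerated above, the policy's PPE requirements could be tabulated as a simple lookup. The task labels and the structure are assumptions introduced for this sketch, not the hospital's actual system:

```python
# Illustrative lookup derived from the Contact Precautions steps quoted above.
# Task labels are assumptions introduced for this sketch.
CONTACT_PRECAUTIONS_PPE = {
    "entering patient room": ["gloves"],
    "direct patient contact": ["gloves"],
    "care likely to soil clothing": ["gloves", "gown"],
    "handling contaminated linen": ["gloves", "gown"],
}

def required_ppe(task):
    """Return the PPE the quoted policy calls for; raises KeyError for an unlisted task."""
    return CONTACT_PRECAUTIONS_PPE[task]
```

Tabulated this way, the housekeeper observed cleaning an isolation room without a gown would fall under a gloves-and-gown task, which is the gap the surveyor documented.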
Review of Patient #3's physician orders revealed a telephone order received by S9RN from S10MD on 05/12/15 at 6:35 p.m. for Contact Precautions and no roommate. Review of Patient #3's "Multidisciplinary Progress Note" documented on 05/12/15 at 2:10 p.m. by S9RN revealed Patient #3 was placed on fall, suicide, and choking precautions. There was no documented evidence that he was placed on Contact Precautions. Observation on 05/14/15 at 9:25 a.m. during the hospital tour revealed no patient room with a sign designating Contact Precautions. Observation on 05/14/15 at 2:34 p.m. revealed a hand-written sign for "Contact Precautions" on the door of Patient #3. There was no documented evidence that the sign indicated the type of PPE that was to be used. Observation on 05/15/15 at 8:05 a.m. revealed a sign on Patient #3's door that read "Stop Contact Precautions Use Bio-hazard Bags Proper Hand Hygiene." There was no documented evidence that the sign indicated the type of PPE that was to be used. In an interview on 05/14/15 at 1:35 p.m., S2DON indicated Patient #3 was ordered to be on Contact Precautions due to having Scabies on a previous admission. She further indicated he is not confined to his room. She confirmed that Contact Precautions had not been implemented as ordered. In an interview on 05/14/15 at 3:50 p.m., S5RN confirmed that she placed the Contact Precautions sign on Patient #3's door after the surveyor had arrived on the morning of 05/14/15. Observation on 05/14/15 at 9:50 a.m. revealed S4Contract Housekeeper with Nursing Home A cleaning patient rooms with no isolation gown over her scrubs as PPE for Contact Precautions. Observation on 05/14/15 at 2:34 p.m. revealed Patient #3, who was ordered to be on Contact Precautions, was outside on the patio with 2 other patients and 2 staff members.
In an interview on 05/14/15 at 10:10 a.m., S4Contract Housekeeper with Nursing Home A indicated she doesn't usually clean in the hospital, but was assigned today. She further indicated the nursing staff didn't inform her whether any patients had infections that would require her to take special precautions when cleaning, and she didn't ask anyone for a report. She confirmed she didn't wear an isolation gown when she cleaned Patient #3's room. In an interview on 05/15/15 at 9:45 a.m., S10MD confirmed Patient #3's Scabies is Crusted Scabies. He indicated Patient #3's hands are scarred, thick, and scaly. S10MD indicated he did not do a scraping. He indicated he received a telephone call from a nurse at the hospital (he did not remember the name) on 05/13/15 telling him the hospital wasn't sure it could get Elimite timely, and without a scraping he/she thought the cost would be an issue. He indicated his change in treatment from 7 days to 2 days was based on his conversation with the nurse. He confirmed that treatment with the oral medication is part of the treatment, and Patient #3's treatment wouldn't be considered complete until the oral medication was finished. He indicated he didn't expect the patient to be confined to his room when he ordered Contact Precautions. He expected no intimate contact and no sharing of a room. He indicated that the patient shouldn't be allowed to sit on sofas or chairs with cushions, and surfaces that he touched should be cleaned. He indicated it would be advisable to treat everyone with symptoms as a precaution, or they could treat everyone who had been in contact with the patient, since no precautions had been taken. Review of Patient #1's medical record revealed he was a [AGE] year old male admitted on [DATE] and discharged on [DATE]. Further review of his H&P documented by S10MD on 01/26/15 at 6:33 p.m. revealed Patient #1 had Scabies. There was no documented evidence that Patient #1 was ordered to be on Contact Precautions.
Review of Patient #1's physician orders revealed an order on 01/29/15 at 9:00 a.m. to continue isolation for rash (no documented evidence of a previous physician order for isolation). Review of Patient #1's nursing notes revealed no documented evidence that Patient #1 was on Contact Precautions until the nursing note of the day shift on 01/29/15. Further review of his nursing notes revealed the only notes with documentation of Contact Precautions being implemented from his admission on 01/26/15 through his discharge on 02/03/15 were entries made on 01/27/15 at 6:00 a.m., 01/29/15, 02/02/15, and 02/03/15. In an interview on 05/18/15 at 11:25 a.m., S10MD indicated that, technically, if the medication was applied that night (the night of admission), it was o.k. He further indicated if itching continued in 2 weeks, he would re-treat the patient. He further indicated he typically wouldn't isolate for scabies, but everything the patient came in contact with should have been cleaned. S10MD indicated he "probably should have put the first patient (Patient #1) in isolation, it would have been the most precautionary, but I don't usually do it." Review of Patient #3's "Comprehensive Integrated Assessment Nursing Assessment - Part 2", documented by S9RN on 05/12/15 at 2:10 p.m., revealed his integumentary system was documented as normal color and "recently treated for scabies." There was no documented evidence that a rash was noted on his hands as documented by S10MD approximately 4 hours later. In an interview on 05/18/15 at 10:10 a.m., S9RN offered no explanation for the lack of documentation of Patient #3's rash to his hands on her nursing admit assessment. 
Review of the hospital policy titled "Medication Administration", presented as a current policy by S2DON, revealed that blood pressure will be obtained and recorded on the MAR (Medication Administration Record) prior to each dose of antihypertensive medication for 3 days, then every morning for one week, and weekly thereafter unless otherwise ordered by the physician. The licensed prescriber is to be notified before administering medication in the event that the blood pressure is below 90/60 or if there is any significant change in blood pressure. Further review revealed the blood pressure will be monitored for patients receiving antipsychotic, anti-anxiety, and antidepressant medications prior to each dose for 3 days, then once weekly, and recorded on the MAR. Review of Patient #3's physician orders revealed a telephone order received on 05/12/15 at 3:10 p.m. for Clonidine 0.2 mg (milligrams) by mouth now. Further review revealed an order written on 05/12/15 at 6:07 p.m. for Norvasc 5 mg by mouth every day. Further review revealed Patient #3 had Abilify ordered for psychosis. Review of Patient #3's MARs revealed he was administered Clonidine 0.2 mg orally on 05/12/15 at 4:10 p.m. and Norvasc 5 mg orally at 8:30 a.m. on 05/13/15, 05/14/15, and 05/15/15 with no documented evidence of Patient #3's blood pressure on the MAR at the time of administration of the antihypertensive medications as required by hospital policy. Further review of the MARs revealed Patient #3 was administered Abilify 10 mg orally on 05/13/15, 05/14/15, and 05/15/15 at 8:30 a.m. with no documented evidence of Patient #3's blood pressure on the MAR at the time of administration as required by hospital policy. 
Review of Patient #6's physician orders revealed an order for Cozaar 50 mg orally daily for Hypertension, Risperdal 1 mg orally every morning for Psychosis, Risperdal 2 mg orally at bedtime for Psychosis, Celexa 40 mg orally at bedtime for Depression, Klonopin 1 mg orally at bedtime for Anxiety, Benztropine 0.5 mg orally twice a day for Anxiety, Topamax 50 mg orally twice a day for Psychosis, and Ativan 0.5 mg orally every 8 hours as needed for Anxiety. Review of Patient #6's MARs revealed she was administered Cozaar 50 mg orally at 8:30 a.m. on 05/19/15 with no documented evidence of her blood pressure. Further review revealed she was administered Topamax, Risperdal 2 mg, Celexa, Klonopin, and Benztropine at 11:00 p.m. on 05/18/15, Celexa at 11:30 a.m. on 05/19/15, Risperdal 1 mg and Benztropine at 8:30 a.m. on 05/19/15, and Benztropine at 8:00 p.m. on 05/19/15 with no documented evidence of her blood pressure on the MAR prior to administration of the medications as required by hospital policy. In an interview on 05/21/15 at 4:15 p.m., S2DON indicated the policy that required the blood pressure to be documented on the MAR prior to the administration of antihypertensive, antipsychotic, anti-anxiety, and antidepressant medications needed to be revised. She confirmed Patient #3 and Patient #6 did not have documented blood pressures on their MARs when their medications were administered. Review of Patient #6's physician orders revealed an order for "Insulin Sliding Scale Orders" with frequency of monitoring to be before meals and at bedtime. Review of Patient #6's MAR revealed no documented evidence of her CBG reading on 05/19/15 at 9:00 p.m. In an interview on 05/21/15 at 4:15 p.m., S2DON confirmed the CBG result was not documented on Patient #6's MAR. Review of the hospital policy titled "Laboratory Services", presented as a current policy by S2DON, revealed that laboratory services are provided by Lab A for routine labwork and Lab C for critical tests. 
Review of Patient #1's admit orders revealed an order to draw a Thyroid Profile, a Lipid Profile, and a Depakote Level. Review of Patient #1's lab results revealed the Depakote Level was not drawn until 02/01/15, 6 days after it was ordered (drawn by Lab A). The Thyroid Profile and Lipid Profile were drawn and resulted by Lab B. Review of Patient #2's medical record revealed she was admitted on [DATE] and discharged on [DATE]. Further review revealed her admit diagnosis was Chronic Paranoid Schizophrenia with Behavioral Disturbances. Review of Patient #2's admit orders revealed orders to draw a CBC with diff (complete blood count with differential), CMP (comprehensive metabolic profile), RPR (Rapid Plasma Reagin), Thyroid Profile, Lipid Profile, Urine Drug Screen, Urine Pregnancy Test, Urinalysis with culture and sensitivity if indicated, and Serum Osmolality. Review of Patient #2's lab results from Lab A revealed no documented evidence that results of a Urine Drug Screen, Urine Pregnancy Test, and Urinalysis were reported. In an interview on 05/21/15 at 4:15 p.m., S2DON could not explain why labs were not done timely and why some labs were drawn without a physician's order. Review of the hospital policy titled "Skin Assessment and care", presented as the current policy by S2DON when the skin assessment and wound care policies were requested, revealed that every patient was to have their skin assessed upon admission, once per shift, with any change in skin integrity, and weekly. As part of the initial nursing assessment, photographs of skin integrity issues will be obtained and filed in the wound care book with associated documentation. The RN or designee is to photograph all wounds, cuts, bruises, rashes, and other skin integrity problems and place photographs in the wound care book by patient. The size, color, appearance, and location of the wound is to be documented on the wound care assessment. Document skin condition each shift on the nursing note. 
Perform a full skin assessment on every patient every Saturday, and document such assessment in the wound care/skin assessment book. Document wound care on the MAR as ordered by the physician. Maintain all photos and wound care documentation and skin assessment as a part of the permanent medical record upon discharge. Review of Patient #2's medical record revealed she was admitted on [DATE] and discharged on [DATE]. Further review revealed her admit diagnosis was Chronic Paranoid Schizophrenia with Behavioral Disturbances. Review of Patient #2's "Multidisciplinary Progress Note" documented 04/29/15 at 10:00 a.m. revealed Patient #2 had a pressure ulcer to the coccyx that measured 0.8 by 0.6 by 0.1 cm (centimeters). It was cleaned with soap and water and a Duoderm was applied. Review of Patient #2's physician's orders revealed a telephone order on 04/29/15 at 10:00 a.m. to clean the wound to the coccyx every 48 hours with soap and water, rinse, pat dry, and apply Duoderm. Further review revealed the order included to perform wound assessment and measurement at the time of wound care every 48 hours. Review of Patient #2's MARs and nurses' notes revealed no documented evidence that her wound was assessed with measurements every 48 hours as ordered (no measurement after initial measurement), and there was no documented evidence that wound care was performed on 05/03/15. Further review of the entire medical record revealed no documented evidence that photographs were taken as required by hospital policy. In an interview on 05/18/15 at 10:05 a.m., S5RN indicated photographs of wounds are supposed to be taken every 2 days. She further indicated there's a form for pictures to be attached with a place for measurements. S5RN confirmed there were no photographs of Patient #2's coccyx pressure ulcer in her medical record. 
Based on record reviews and interviews, the hospital failed to ensure that the nursing staff developed and kept current an individualized nursing care plan for each patient as evidenced by having a delay in initiation of the nursing care plan (#3) and inaccurate assessment of goal achievement (#5) for 2 (#3, #5) of 3 (#3, #5, #6) current inpatient records reviewed for nursing care plans from a sample of 6 (#1 - #6). Review of the hospital policy titled "Nursing Services", presented as a current policy by S2DON (Director of Nursing), revealed that a nursing plan of care shall be developed based on identified nursing diagnoses and/or patient care needs and patient care standards, implemented in accordance with the Louisiana Nurse Practice Act, and shall be consistent with the plan of all other health care disciplines. Review of Patient #3's medical record revealed he was a [AGE] year old male admitted on [DATE] with a diagnosis of Schizoaffective Disorder, Bipolar type under PEC. Review of his History and Physical (H&P) documented by S10MD (Medical Doctor) on 05/12/15 at 6:07 p.m. revealed Patient #3 had a rash noted on his hands and diagnoses of Depression, Hypertension, GERD (Gastroesophageal Reflux Disease), Schizophrenia, & Scabies. Review of Patient #3's physician orders revealed an order on 05/12/15 at 6:07 p.m. for Elmite cream and Ivermectin orally for treatment of Scabies and an order on 05/12/15 at 6:35 p.m. for Contact Precautions. Review of Patient #3's "Integrated treatment Plan" revealed a nursing care plan for "Impaired Skin Integrity" was developed on 05/14/15, 2 days after Patient #3 was admitted and diagnosed with and treated for Scabies. Review of Patient #5's medical record revealed he was a [AGE] year old male admitted on [DATE] with diagnosis of Major Depression, recurrent, severe with Suicidal Ideation. Further review revealed additional diagnoses included Polysubstance Dependence (Cocaine, Opiates), Type II Diabetes Mellitus, and Hepatitis C. 
He was discharged on [DATE]. Review of his physician orders revealed an order for Accuchecks before meals and at bedtime with regular Insulin Sliding Scale. Review of Patient #5's nursing care plan for "Alteration in Health Maintenance related to Blood Sugar" revealed his goal was to have improved blood sugars with no blood sugar greater than 140 for 3 consecutive days within 7 days. Further review revealed the goal was documented as achieved on 05/16/15. Review of Patient #5's MARs (Medication Administration Record) revealed his blood sugar was 154 on 05/13/15 at 9:00 p.m. and 168 on 05/14/15 at 9:00 p.m. Further review revealed Patient #5's goal of blood sugar not being greater than 140 for 3 consecutive days was not met. In an interview on 05/21/15 at 4:15 p.m., S2DON offered no explanation for Patient #3's nursing care plan for skin integrity not being initiated timely and for the inaccuracy of Patient #5's goal achievement for blood sugars. Based on record reviews and interviews, the hospital failed to ensure a registered nurse (RN) assigned the nursing care of each patient to other nursing personnel in accordance with the patient's needs and the specialized qualifications and competence of the nursing staff as evidenced by having no documented evidence of the qualifications of the individuals training the nursing staff in crisis prevention interventions (CPI) for 9 (S2, S5, S8, S9, S12, S13, S14, S15, S26) of 10 (S2, S5, S8, S9, S12, S13, S14, S15, S18, S26) nursing staff personnel files reviewed for competency from a total of 24 employed nursing personnel. Based on record reviews and interviews, the hospital failed to ensure a contract was signed for each lab providing services to the hospital as evidenced by no documented evidence of the contract with Lab B signed by Lab B's representative and having no documented evidence of a contract with Hospital A for performing critical lab tests in accordance with the hospital's lab policies. 
Review of the hospital policy titled "Laboratory Services", presented as a current policy by S2DON (Director of Nursing), revealed that laboratory services are provided by Lab A for routine labwork and Lab C for critical tests. Review of the contracts for lab services presented by S2DON revealed contracts were in place with Lab A, Lab B, and Hospital A. There was no documented evidence of a contract with Lab C, which was to perform critical tests as per hospital policy. Review of the contract with Lab B, effective 11/20/14, revealed no documented evidence that the contract had been signed by the representative of Lab B. Lab services were provided for Patient #1 by Lab B on 01/27/15. In an interview on 05/21/15 at 4:15 p.m., S2DON (interviewed in the absence of S1Administrator) could offer no explanation for the contract with Lab B not being signed by the representative of Lab B and for not having a contract with Lab C. She confirmed that the lab services policy did not reflect the lab services currently being provided. a) Delaying treatment by failing to implement physician orders for Patient #3 who was diagnosed with Scabies on 05/12/15 at 6:07 p.m. The hospital failed to administer physician-ordered Elmite Cream 5% until the night of 05/13/15, more than 24 hours after it was ordered (see findings at tag A0749). b) Failing to implement Contact Precautions per hospital policy and MD orders as evidenced by observations of Patient #3 not being confined to his room, no identification of Contact Precautions and type of personal protective equipment (PPE) required to treat Patient #3, and no observation of staff and the contracted housekeeper donning PPE when providing care and cleaning the patient's room. Patient #3 was observed on 05/14/15 at 2:34 p.m. on the outside patio with 2 other patients and 2 staff members. 
This had the potential to affect the health of 3 other admitted patients, all staff of the hospital, and the residents and staff at the attached nursing home where the contracted housekeeper is employed (see findings at tag A0749). The hospital presented a Corrective Action Plan to lift the Immediate Jeopardy on 05/18/15 at 1:15 p.m. Due to the Corrective Action Plan having no documented objective, specific plans for revision of hospital policies and procedures, development of the nurse-to-nurse report tool mentioned in the plan, development and implementation of PI (Performance Improvement) tracking monitors for communicable diseases and appropriate precautions, staff education, and a plan for how the infection control program would be managed until the hospital hired an experienced and qualified infection control officer, the Corrective Action Plan was not accepted. The hospital presented a second Corrective Action Plan to lift the I.J. on 05/21/15 at 3:30 p.m. The plan did not include objective, specific plans for monitoring the screening of patients for infectious and communicable diseases, evidence of treatment of patients who had come in contact with Patient #3, evidence of staff and contracted staff assessment for symptoms of Scabies and refusal of treatment, how the infection control program would be managed until the re-hired infection control officer obtained recent training on infection control, and how the infection control activities would be coordinated into the QAPI (quality assessment and performance improvement) plan. The Corrective Action Plan was not accepted. The I.J. remained in place as of the time of exit on 05/21/15 at 6:20 p.m. 
c) Failing to have updated infection control policies and procedures as evidenced by having no documented evidence that the hospital's infection control policies and procedures had been reviewed and revised as needed by the infection control officer since development of the policies on 08/01/13 (see findings in tag A0749). d) Failing to maintain a sanitary physical environment as evidenced by failure of staff to disinfect the chair, table, wall, and handrails touched by Patient #3 who was diagnosed with Scabies and had physician orders for Contact Precautions (see findings in tag A0749). i) Failure of staff to perform handwashing or to use alcohol-based hand sanitizer before and after patient contact and after removal of gloves as observed on 05/15/15 (several observations) and 05/18/15. ii) Failure to develop a system to identify patients known to be colonized or infected with a targeted MDRO (multi-drug resistant organism) and for notification of receiving healthcare facilities and personnel prior to transfer of such patients between facilities. iii) Failure to develop a policy to ensure that patients identified as colonized or infected with target MDROs are placed on Contact Precautions as evidenced by having the hospital's policy addressing only MRSA (Methicillin-resistant Staphylococcus aureus). iv) Failure to have alcohol-based hand rub readily accessible and placed in appropriate locations as evidenced by having wall-mounted alcohol-based hand rubs in the nursing station and physician's exam room and 2 partially-filled small containers of alcohol-based hand rub locked in a drawer in the dining room. There was no documented evidence that the hospital had developed a plan for alcohol-based hand rubs to be readily accessible to staff, since wall-mounted alcohol-based hand rubs were limited due to risk factors in the psychiatric hospital. 
v) Failure to develop a plan to ensure PPE supplies used for Standard Precautions were available and located near the point of use as evidenced by having the gowns, gloves, mouth, eye, nose, and face protection stored in the physician's exam room which would not be accessible if the physician was examining a patient. vii) Failure to ensure that reusable noncritical patient care devices, such as blood pressure cuffs and oximeter probes, are disinfected on a regular basis and when visibly soiled as evidenced by failure of staff to clean blood pressure cuffs between patient use. viii) Failure of staff to follow manufacturer's guidelines for cleaning point of care devices as evidenced by observation of S13LPN (Licensed Practical Nurse) cleaning the glucometer with a wet paper towel and soap rather than a disinfectant wipe. ix) Failure to perform active surveillance of handwashing and use of PPE as evidenced by having no documented evidence of handwashing surveillance from 01/01/15 through 05/21/15 (see findings in tag A0749). 2) Failing to ensure it designated a qualified and experienced infection control officer after the resignation of S12LPN (Licensed Practical Nurse) on 02/13/15. The hospital failed to have an infection control officer qualified through education, ongoing training, experience, or certification from 02/13/15 through the completion of the survey on 05/21/15 (see findings in tag A0748). 3) Failing to ensure the chief executive officer, the medical staff, and the director of nursing assured the hospital-wide QAPI program addressed problems identified by the infection control officer and was responsible for the implementation of successful corrective action plans in affected problem areas as evidenced by failure to have documented evidence of the collection of, tracking, and analysis of infection control data with corrective action plans for identified problems from 01/01/15 through the time the survey was completed on 05/21/15 at 6:20 p.m. 
(see findings in tag A0756). Based on record reviews and interviews, the hospital failed to ensure it designated a qualified and experienced infection control officer after the resignation of S12LPN (Licensed Practical Nurse) on 02/13/15. The hospital failed to have an infection control officer qualified through education, ongoing training, experience, or certification from 02/13/15 through the completion of the survey on 05/21/15. Review of the "Full Time Employees" list, presented by S2DON (Director of Nursing) when a list of all staff with job title and date of hire was requested, revealed no documented evidence of a staff member designated as the infection control officer. Further review revealed no documented evidence that S17LPN (Licensed Practical Nurse) was listed as an employee. In an interview on 05/14/15 at 1:35 p.m., S2DON indicated S17LPN had been hired about the beginning of May as the infection control officer but had not finished her orientation or skills checklist yet. She further indicated S17LPN was currently on leave after having had an accident. S2DON confirmed S17LPN had no education, training, or experience in infection control. In an interview on 05/20/15 at 11:20 a.m., S6LPN indicated S1Admin (Administrator) had asked her to be the infection control officer and asked her to sign a job description as such, "so he'd have it on paper." She further indicated she didn't accept the job of infection control officer, because she knew "they wouldn't allow her to do what's necessary to comply." In an interview on 05/20/15 at 3:00 p.m., S12LPN indicated her last day of work before returning on 05/20/15 was 02/13/15. She further indicated she was the infection control officer at the time of her resignation. In an interview on 05/21/15 at 8:05 a.m., S1Admin indicated S12LPN is now the infection control officer as of 05/20/15. 
He further indicated she had been the previous infection control officer, and her personnel file wasn't terminated when she left in February. In an interview on 05/21/15 at 12:50 p.m., S12LPN indicated she had prior infection control experience but had not received any additional training or education in infection control for the past 2 years. 1) Delay in treatment by failing to implement physician orders for Patient #3 who was diagnosed with Scabies on 05/12/15 at 6:07 p.m. The hospital failed to administer physician-ordered Elmite Cream 5% until the night of 05/13/15, more than 24 hours after ordered. 2) Failing to implement Contact Precautions per hospital policy and MD orders as evidenced by observations of Patient #3 not being confined to his room, no identification of Contact Precautions and type of personal protective equipment (PPE) required to treat Patient #3, and no observation of staff and the contracted housekeeper donning PPE when providing care and cleaning the patient's room. Patient #3 was observed on 05/14/15 at 2:34 p.m. on the outside patio with 2 other patients and 2 staff members. This had the potential to affect the health of 3 other admitted patients, all staff of the hospital, and the residents and staff at the attached nursing home where the contracted housekeeper is employed. 3) Failing to ensure it designated a qualified and experienced infection control officer to implement its infection control program after the resignation of S12LPN (Licensed Practical Nurse) on 02/13/15. The hospital failed to have an infection control officer qualified through education, ongoing training, experience, or certification from 02/13/15 through the completion of the survey on 05/21/15. 
4) Failing to have updated infection control policies and procedures as evidenced by having no documented evidence that the hospital's infection control policies and procedures had been reviewed and revised as needed by the infection control officer since development of the policies on 08/01/13. a) Failure to obtain physician orders for Contact Precautions for 1 (#1) of 1 closed medical record reviewed with physician orders for Contact or Isolation Precautions from a sample of 6 (#1 - #6) patients. b) Failure of staff to disinfect the chair, table, wall, and handrails touched by Patient #3 who was diagnosed with Scabies and had physician orders for Contact Precautions. a) Failure of staff to perform handwashing or to use alcohol-based hand sanitizer before and after patient contact and after removal of gloves as observed on 05/15/15 (several observations) and 05/18/15. b) Failure to develop a system to identify patients known to be colonized or infected with a targeted MDRO (multi-drug resistant organism) and for notification of receiving healthcare facilities and personnel prior to transfer of such patients between facilities. c) Failure to develop a policy to ensure that patients identified as colonized or infected with target MDROs are placed on Contact Precautions as evidenced by having the hospital's policy addressing only MRSA (Methicillin-resistant Staphylococcus aureus). d) Failure to have alcohol-based hand rub readily accessible and placed in appropriate locations as evidenced by having wall-mounted alcohol-based hand rubs in the nursing station and physician's exam room and 2 partially-filled small containers of alcohol-based hand rub locked in a drawer in the dining room. There was no documented evidence that the hospital had developed a plan for alcohol-based hand rubs to be readily accessible to staff, since wall-mounted alcohol-based hand rubs were limited due to risk factors in the psychiatric hospital. 
e) Failure to develop a plan to ensure PPE supplies used for Standard Precautions were available and located near the point of use as evidenced by having the gowns, gloves, mouth, eye, nose, and face protection stored in the physician's exam room which would not be accessible if the physician was examining a patient. g) Failure to ensure that reusable noncritical patient care devices, such as blood pressure cuffs and oximeter probes, are disinfected on a regular basis and when visibly soiled as evidenced by failure of staff to clean blood pressure cuffs between patient use. h) Failure of staff to follow manufacturer's guidelines for cleaning point of care devices as evidenced by observation of S13LPN (Licensed Practical Nurse) cleaning the glucometer with a wet paper towel and soap rather than a disinfectant wipe. Review of Patient #3's medical record revealed he was a [AGE] year old male admitted on [DATE] with a diagnosis of Schizoaffective Disorder, Bipolar type under PEC. Review of his History and Physical (H&P) documented by S10MD (Medical Doctor) on 05/12/15 at 6:07 p.m. revealed Patient #3 had a rash noted on his hands and diagnoses of Depression, Hypertension, GERD (Gastroesophageal Reflux Disease), Schizophrenia, & Scabies. Further review revealed S10MD's treatment plan included Elmite 5% (per cent) every day for 7 days and Ivermectin 0.2 mg/kg (milligrams per kilogram) by mouth on day 1, 2, 8, 9, and 15. Review of Patient #3's physician orders revealed an order written by S10MD on 05/12/15 at 6:07 p.m. for Elmite 5% Cream apply for 7 days and Ivermectin 3 mg tablet, give 5 tablets, by mouth on day 1, 2, 8, 9, and 15. Further review revealed a clarification telephone order from S10MD on 05/13/15 at 1:30 p.m. to administer Elmite 5% Cream for 2 days. Review of Patient #3's MAR (Medication Administration Record) revealed no documented evidence that Patient #3 was administered Elmite Cream 5% on 05/12/15. 
Further review revealed the first administration of Elmite Cream 5% was at 8:00 p.m. on 05/13/15, more than 25 hours after the physician order was written. Review of Patient #3's "Multidisciplinary Progress Note" on 05/18/15 revealed documentation by S14RN of "5/12/15 22:00 (10:00 p.m.) Nsg (nursing) Late Entry Late Entry Permethrin Crm (Cream) 5% administer as order to head to toe." There was no documented evidence of the date and time the late entry was documented by S14RN. In an interview on 05/14/15 at 3:25 p.m., S5RN (Registered Nurse) confirmed Patient #3's MAR had no documentation that Elmite Cream was administered on the night of 05/12/15. She indicated that she spoke with the night nurse of 05/13/15 that morning during report, and S18RN (night nurse) indicated she had administered Elmite cream the previous night. In an interview on 05/18/15 at 9:15 a.m., S14RN indicated he administered Elmite Cream to Patient #3 on the night of 05/12/15 but didn't document it anywhere. He further indicated that night was his first time out of orientation, and he hadn't given medications in the LPN's role for about 7 years. Review of the hospital policy titled "Management of Outbreaks (Lice/Scabies)", originated August 2013 and presented as a current policy by S2DON, revealed that the patient suspected of having Scabies would be immediately placed in Contact Isolation. The patient's room door will be kept closed. Transmission-based protocols (contact precautions) will be followed until completion of treatment and 8 hours thereafter. All contaminated towels/linen are to be handled with care with the employee using appropriate PPE such as gloves and gowns. Review of Patient #3's physician orders revealed an order written by S10MD on 05/12/15 at 6:07 p.m. for Elmite 5% Cream apply for 7 days and Ivermectin 3 mg tablet, give 5 tablets, by mouth on day 1, 2, 8, 9, and 15. Further review revealed a clarification telephone order from S10MD on 05/13/15 at 1:30 p.m. 
to administer Elmite 5% Cream for 2 days. Further review revealed a telephone order received by S9RN (Registered Nurse) from S10MD on 05/12/15 at 6:35 p.m. for Contact Precautions and no roommate. In an interview on 05/14/15 at 1:35 p.m., S2DON indicated Patient #3 was ordered to be on contact precautions due to having Scabies on a previous admission. She further indicated he was not confined to his room. 3) Failing to ensure it designated a qualified and experienced infection control officer to implement its infection control program after the resignation of S12LPN on 02/13/15: Review of the "Full Time Employees" list, presented by S2DON (Director of Nursing) when a list of all staff with job title and date of hire was requested, revealed no documented evidence of a staff member designated as the infection control officer. Further review revealed no documented evidence that S17LPN (Licensed Practical Nurse) was listed as an employee. Review of the "Infection Control P & P (Policies and Procedures)" manual, presented by S2DON, revealed it was dated 08/01/13. Further review revealed no documented evidence that the policies and procedures had been reviewed and revised by the infection control officer since the policies and procedures were developed. In an interview on 05/21/15 at 12:50 p.m., S12LPN (Licensed Practical Nurse) indicated she was the designated Infection Control Officer as of 05/20/15. She further indicated she had previously been the Infection Control Officer at the hospital from 12/23/13 until 02/13/15. She indicated that she had not revised any infection control policies since she had been hired in 2013. S12LPN indicated the infection control policies and procedures needed revisions, because the policies only relate to MRSA and should reference MDROs. 
Review of the CDC's "2007 Guideline for Isolation Precautions: Preventing Transmission of Infectious Agents in Healthcare Settings" revealed the recommendation to don the indicated PPE upon entry into the patient's room for patients who are on Contact and/or Droplet Precautions, since the nature of the interaction with the patient cannot be predicted with certainty and contaminated environmental surfaces are important sources for transmission of pathogens. Further review revealed mites from a Scabies-infested patient are transferred to the skin of a caregiver while he/she is having direct ungloved contact with the patient's skin. Review of Patient #1's medical record revealed he was a [AGE] year old male admitted on [DATE] and discharged on [DATE]. Further review of his H&P documented by S10MD on 01/26/15 at 6:33 p.m. revealed Patient #1 had Scabies. There was no documented evidence that Patient #1 was ordered to be on Contact Precautions. 2) Attempt to ensure that all persons who receive treatment have the clothing and bedding they used anytime during the 3 days before treatment machine-washed and dried using the hot water and high heat cycles. Clean the room of patients with crusted scabies regularly to remove contaminating skin crusts and scales that can contain many mites. Observation on 05/15/15 at 8:40 a.m. revealed Patient #3 leaving the Dining/Activity Room and opening the door to and entering his room. Observation on 05/15/15 at 8:45 a.m. revealed S15MHT (Mental health tech) touching the door to Patient #3's room with ungloved hands. Further observation revealed S15MHT did not sanitize or hand wash after touching the door. Further observation revealed no one cleaned the chair and table that Patient #3 used in the Dining/Activity Room after he left. Observation on 05/15/15 at 8:46 a.m. revealed Patient #3 exited his room and went to sit in the same chair in the Dining/Activity Room. 
He leaned on the wall and touched the handrail in the hall across from the nursing station. Observation revealed S15MHT told him "quit touching everything." Further observation revealed no one cleaned the wall or handrail that Patient #3 had touched. In an interview on 05/15/15 at 9:45 a.m., S10MD confirmed Patient #3's Scabies is Crusted Scabies. In an interview on 05/15/15 at 9:50 a.m., S15MHT confirmed he didn't wipe the wall and handrail that were touched by Patient #3 until approximately 10 minutes after, upon his return from a 10-minute smoke break with patients. He confirmed the chair and table that Patient #3 is assigned to in the Dining/Activity Room hadn't been disinfected since breakfast. He further indicated they aren't disinfected each time after Patient #3 uses them. He confirmed that he touched the door handle to Patient #3's room with his bare hands after Patient #3 had touched it, and he (S15MHT) did not perform hand hygiene. Review of the hospital policy titled "Infection Control P&P", presented as the current infection control policies and procedures by S12LPN, revealed that personnel should wash their hands thoroughly and promptly between patients to reduce contamination. Further review revealed handwashing is also required after the use of restroom facilities, after break or lunch, after any nursing procedure, or any time the hands become soiled.

8) Decontaminate hands after removing gloves. Observation on 05/15/15 at 10:40 a.m. revealed S13LPN performing an Accucheck on Patient #5. While wearing contaminated gloves (after obtaining the blood specimen) S13LPN touched the test strip container to close it and placed it in the glucometer case. Further observation revealed S13LPN then removed her gloves and did not immediately perform hand hygiene.
She then carried the glucometer in one hand and the sharps container in the other hand, unlocked and opened the Medication Room door, washed the glucometer with a wet paper towel with soap on it, rinsed the glucometer, and then washed her hands. Observation on 05/18/15 at 8:20 a.m. revealed S2DON administering patients' medications. Further observation revealed S2DON touched the first patient's arm to check the armband and did not perform hand hygiene before continuing medication administration for the second patient. S2DON then touched Patient #3's hand to check his armband and administered his medications with no observation of S2DON performing hand hygiene after administering Patient #3's medications. In an interview on 05/21/15 at 12:50 p.m., S12LPN indicated she was the designated Infection Control Officer. She confirmed the above situations required hand hygiene to be performed after removing gloves and between patient contact during medication administration. She confirmed that she had no documented evidence to present of hand hygiene surveillance for the current calendar year. Review of the hospital policy titled "Infection Control P&P", presented as the current infection control policies and procedures by S12LPN, revealed no documented evidence that a policy and procedure had been developed and implemented to identify patients known to be colonized or infected with a targeted MDRO and for notification of receiving healthcare facilities and personnel prior to transfer of such patient between facilities. In an interview on 05/21/15 at 12:50 p.m., S12LPN indicated she was the designated Infection Control Officer. She further indicated the hospital did not have a policy and procedure or system in place to identify patients known to be colonized or infected with a targeted MDRO and for notification of receiving healthcare facilities and personnel prior to transfer of such patient between facilities. 
Review of the hospital policy titled "Infection Control P&P", presented as the current infection control policies and procedures by S12LPN, revealed no documented evidence that a policy and procedure had been developed and implemented to ensure that patients identified as colonized or infected with target MDROs are placed on Contact Precautions as evidenced by having the hospital's policy addressing only MRSA. In an interview on 05/21/15 at 12:50 p.m., S12LPN indicated she was the designated Infection Control Officer. She further indicated the hospital's infection control policies and procedures needed revision, because the only MDRO addressed in its policies and procedures was MRSA. Observation on 05/14/15 at 9:55 a.m. revealed hand sanitizer was mounted on the wall in the physician exam room and inside the nursing station. Further observation revealed the Dining/Activity room had a locked drawer with 2 opened bottles of hand sanitizer. Further observation at 9:57 a.m. in the Storage Room revealed no individual hand sanitizer. In an interview on 05/21/15 at 12:50 p.m., S12LPN indicated she was the designated Infection Control Officer. She further indicated that hand sanitizer can't be mounted on the walls that are accessible to the psychiatric patients (due to safety issues). She further indicated that hand sanitizer is locked in a drawer in the Activity/Dining room for staff use. S12LPN confirmed the hospital did not have a system developed for easy accessibility of hand sanitizer by staff, since wall-mounted hand sanitizers were not able to be located throughout the hospital. Observation on 05/14/15 at 8:15 a.m. revealed a plastic rolling cart in the Physician Exam Room had red biohazard bags & bouffant hair covers, 1 open and 1 unopened bag of yellow isolation gowns, and one full box of latex gloves. A second cart in the room had an opened and partially-filled box of face masks, 1 unopened box of gloves, and 1 opened bag of yellow isolation gowns.
In an interview on 05/21/15 at 12:50 p.m., S12LPN indicated she was the designated Infection Control Officer. When asked how the staff were to access PPE contained in the Physician Exam Room if PPE was needed when the physician was examining the patient, S12LPN indicated the staff was supposed to put a red biohazard bag in their pocket when they went to the room of a patient who was on Contact Precautions. She confirmed the hospital did not have a system in place to assure that all staff had PPE easily accessible to them for use with a patient on Contact Precautions.

Based on record reviews and interviews, the hospital failed to ensure drugs and biologicals were administered in accordance with the orders of the physician for 1 (#3) of 3 (#3, #5, #6) current inpatients and 1 (#2) of 3 (#1, #2, #4) closed medical records from a total sample of 6 patients. Review of the hospital policy titled "Medication Administration", presented as a current policy by S2DON (Director of Nursing), revealed that all medications required an order which is written on the physician's order form and must contain the name of the medication, dose, time to be administered, route, reason/indication the medication is prescribed, and the specific time the first dose is to be administered. Further review revealed no documented evidence of the time interval after receipt of the order for administration of medications ordered to be given "now." Review of Patient #3's physician's orders revealed an order on 05/12/15 at 3:10 p.m. to administer Clonidine 0.2 mg orally now. Further review revealed an order on 01/12/15 at 6:07 p.m. for Elmite 5% (per cent) Cream to be applied every day for 7 days and Ivermectin 3 mg (milligram) tablets, 5 by mouth, on days 1, 2, 8, 9, and 15. Further review revealed a clarification telephone order from S10MD on 05/13/15 at 1:30 p.m. to administer Elmite 5% Cream for 2 days.
Review of Patient #3's MARs (Medication Administration Record) revealed he received Clonidine 0.2 mg orally on 05/12/15 at 4:10 p.m., 1 hour after it was ordered by the physician to be given now at 3:10 p.m. Further review of the MAR on 05/14/15 revealed Elmite Cream was not applied on 05/12/15 as ordered. Further review revealed it was applied on 05/13/15 at 8:00 p.m. and on 05/14/15 at 8:00 p.m. In an interview on 05/18/15 at 9:15 a.m., S14RN (Registered Nurse) indicated he applied Elmite Cream to Patient #3 on the night of 05/12/15, but he didn't document the administration. He confirmed that by failing to document the administration a medication error occurred, because Patient #3 had Elmite Cream applied for 3 days rather than 2 days as ordered by the physician. In an interview on 05/21/15 at 4:15 p.m., S2DON confirmed the medication administration policy did not address the time interval for administering a medication ordered to be given "now" (after receipt of the order).

05/01/15 at 9:40 a.m. - Increase Saphris to 10 mg SL twice a day. Review of Patient #2's MARs revealed she received Saphris 5 mg SL on 05/01/15 at 8:30 a.m. There was no documented evidence that an additional 5 mg SL was administered at 9:40 a.m. when the order was received to increase the dose to 10 mg SL twice a day. In an interview on 05/21/15 at 4:15 p.m., S2DON indicated an additional dose of Saphris should have been administered when the order was received to increase it.

1) Failing to implement its Medical Staff By-laws and Rules and Regulations for delinquent medical records as evidenced by having 59 delinquent medical records not completed within 30 days after discharge and the physician's admitting privileges not being suspended as required by the Medical Staff By-laws and Rules and Regulations for 1 (S11) of 2 credentialed psychiatrists (see findings in tag A0438).
2) Failing to develop a system for coding and indexing medical records that allowed timely retrieval by diagnosis (see findings in tag A0440).

Based on record reviews and interviews, the hospital failed to implement its Medical Staff By-laws and Rules and Regulations for delinquent medical records as evidenced by having 59 delinquent medical records not completed within 30 days after discharge and the physician's admitting privileges not being suspended as required by the Medical Staff By-laws and Rules and Regulations for 1 (S11) of 2 credentialed psychiatrists. Review of the hospital's "Rules and Regulations For The Professional Medical Staff", presented as the current rules and regulations by S1Admin (Administrator), revealed that the attending physician shall be responsible for the preparation of a complete and legible medical record for each patient. Further review revealed each medical record shall be completed within 30 days after the discharge of the patient or the record becomes delinquent. On a continuous basis, the medical record director shall review incomplete records. At this time, any physician who has any delinquent charts shall be so notified by phone. If the records are still incomplete two weeks after being notified, he shall automatically suffer suspension of admitting privileges. He shall be notified of such suspension in writing by the medical record director. Review of the Medical Staff By-laws and Rules and Regulations revealed no documented evidence of a procedure to administratively close incomplete medical records. Review of a list of "Charts for Administrative Closure", presented by S3MR Coord (Medical Record Coordinator) on 05/14/15 at 4:15 p.m., revealed a list of 57 patients who had been discharged between 01/08/15 and 03/29/15.
Further review revealed the column titled "Admitting Physician" contained 43 records for S11Psychiatrist and 14 records for S25APRN (Advanced Practice Registered Nurse) (who has a collaborative practice agreement with S11Psychiatrist). Review of the "Medical Record Delinquent Detail Report", presented by S3MR Coord on 05/14/15, revealed Patient R1 was discharged on [DATE] and was awaiting signatures of an LPN (Licensed Practical Nurse) on a MAR (Medication Administration Record) and S23MD (Medical Doctor) on his progress note. Further review revealed Patient R2 was discharged on [DATE] and was awaiting the signature of S10MD on a physician order. In an interview on 05/14/15 at 10:45 a.m., S3MR Coord indicated she was hired on 02/17/15. She further indicated the Medical Record Department was "backed up" when she was hired, and some medical records were "administratively closed" by the hospital. When asked what she meant by "administratively closed", she indicated the charts were tagged, closed, and had to go before the Governing Body for approval. S3MR Coord indicated she didn't know how many charts were "administratively closed." She further indicated her supervisor S6LPN had a list, but she (S6LPN) was on vacation this week. During the interview, S3MR Coord presented documents of incomplete charts. She indicated none of the charts were delinquent. In an interview on 05/14/15 at 12:10 p.m., S3MR Coord presented documentation of medical records awaiting staff signatures. During the interview, review of the documentation revealed 5 patient records were delinquent, as the patients had been discharged more than 30 days earlier. Further review revealed 2 of the 5 records were awaiting the signature of S10MD. S3MR Coord indicated the medical records weren't considered delinquent, because the physician had signed them, and they were only awaiting signatures by staff members.
S3MR Coord confirmed that she didn't know that any record still incomplete after 30 days, whether awaiting physicians' signatures or signatures of staff members, was delinquent. She further indicated she didn't know it was delinquent if only staff signatures were needed. In an interview on 05/14/15 at 2:10 p.m., S2DON (Director of Nursing) indicated she had texted S6LPN, S3MR Coord's supervisor, who was on vacation, to ask about the "administratively closed" medical records. She further indicated S6LPN indicated she didn't know how many were closed, but they had not been "administratively closed". She further indicated that S6LPN indicated the records needed to be audited with a list submitted to the Medical Executive Committee (MEC) and Governing Body for approval before they could be "administratively closed." In an interview on 05/18/15 at 10:55 a.m., S3MR Coord indicated she didn't notify S11Psychiatrist of his more recent medical records that were delinquent due to waiting for staff signatures. She further indicated she usually spoke verbally with S11Psychiatrist and didn't have any documentation of those conversations to present. She further indicated she didn't have any documentation of her conversations with S25APRN, because S25APRN "just comes by and signs charts." S3MR Coord indicated that S11Psychiatrist had not been suspended since she was hired on 02/17/15. She further indicated she had spoken with the former Administrator, who had created a document to address suspension, but it was never used. She further indicated she had created a letter, but it was never used. S3MR Coord indicated she couldn't explain why the document created by the former Administrator and the letter she created were never implemented. In an interview on 05/21/15 at 11:40 a.m., S11Psychiatrist confirmed he is the hospital's Medical Director. When asked about his delinquent medical records, he indicated that he signs everything that's brought to him when he's at the hospital.
He further indicated he knew that at some point the hospital was trying to get orders signed. He further indicated that he was surprised that he had not been getting requests for signatures recently. S11Psychiatrist indicated he was surprised to hear that he's delinquent with medical records to the point of being suspended. He further indicated no one had informed him that he currently had delinquent medical records. S11Psychiatrist indicated he thought it had been about 3 months since the last MEC meeting, and MEC meetings were supposed to be held quarterly (Medical Staff By-laws revealed MEC meetings were to be held monthly).

Based on interviews, the hospital failed to develop a system for coding and indexing medical records that allowed timely retrieval by diagnosis. In an interview on 05/14/15 at 1:35 p.m., S2DON (Director of Nursing) was asked if she could provide a list of patients treated in the last year who had wounds. She indicated she could not pull patients by diagnosis. In an interview on 05/20/15 at 3:35 p.m., S3MR Coord (Medical Records Coordinator) confirmed the hospital did not have a system in place for coding and indexing medical records that allowed timely retrieval by diagnosis.

Based on record reviews and interview, the hospital failed to ensure the chief executive officer, the medical staff, and the director of nursing assured the hospital-wide quality assessment and performance improvement (QAPI) program addressed problems identified by the infection control officer and was responsible for the implementation of successful corrective action plans in affected problem areas as evidenced by failure to have documented evidence of the collection, tracking, and analysis of infection control data with corrective action plans for identified problems. No documented evidence of QAPI or Infection Control meeting minutes were presented for the calendar year of 2015 as of the time the survey was completed on 05/21/15 at 6:20 p.m.
Review of the hospital policy titled "Infection Control P & P", presented as a current policy by S12LPN (Licensed Practical Nurse), revealed that the Infection Control Program is reported on a monthly basis to the Performance Improvement/Medical Staff Committee. Information in this report will include, but is not limited to, results related to surveillance, emerging pathogens, public health bulletins or issues, CDC (Centers for Disease Control and Prevention) recommendations or alerts, quality improvement issues, results of clinical care surveillance rounds, and special studies/reports. Review of the hospital policy titled "Performance Improvement Plan", presented as a current policy by S2DON (Director of Nursing), revealed the program included infection surveillance/prevention/control. Further review revealed final responsibility for performance improvement in the provision of quality services rests with its Chief Executive Officer (CEO). The CEO will meet with the Senior Management Committee at least quarterly to review all reports concerned with the overall Performance Improvement activities. The Performance Improvement Committee meets at least monthly to review and analyze data from monthly Infection Surveillance, Prevention, and Control activities. Review of Governing Body meeting minutes conducted on 08/06/14, 02/11/15, 03/02/15, and 03/27/15 revealed no documented evidence that Infection Control or QAPI was discussed during any of the meetings. In an interview on 05/20/15 at 11:20 a.m., S6LPN indicated she had been employed in October/November 2014 as the Utilization Review Nurse. She further indicated since then she had sat in one QAPI meeting and nothing related to PI (Performance Improvement) was discussed. She indicated it was more like a social meeting. S6LPN indicated she had no PI data or meeting minutes from the previous person doing PI (S12LPN) to present; she could look for them, but nothing had been given to her.
She further indicated she had never signed a job description as being responsible for QAPI. In an interview on 05/20/15 at 3:00 p.m., S12LPN indicated her last day of work was 02/13/15. She further indicated she didn't have any QAPI or Infection Control data or meeting minutes from 01/01/15 through 05/20/15 to present to the surveyor. When asked about the January 2015 data, she indicated her January monitors "were on my desk and I don't know what happened to them".
Insolvency proceedings, including bankruptcy proceedings, reorganisation proceedings with self-administration and reorganisation proceedings without self-administration, are governed by the Austrian Insolvency Code (the Insolvency Code). In addition to the Insolvency Code, the Business Reorganisation Law of 1997 (the Business Reorganisation Law) governs a specific form of ‘reorganisation’ supporting the restructuring of a solvent debtor’s business. ‘Reorganisations’ under the Business Reorganisation Law are not insolvency proceedings and do not affect creditors’ rights. In general, both individuals and legal entities can be subject to insolvency proceedings. This includes general partnerships, limited partnerships, professional partnerships, professional limited partnerships and European economic interest groupings as well as a deceased person’s estate. The Supreme Court has ruled that even municipalities may be subject to insolvency proceedings. Owing to a lack of legal standing, civil partnerships, silent partnerships and cartels cannot enter into insolvency proceedings. Only their partners may be subject to insolvency proceedings. Reorganisation proceedings with or without self-administration and reorganisations under the Business Reorganisation Law do not apply to credit institutions, insurance companies and pension funds. For such entities, special provisions set out in the Banking Act, the Insurance Company Supervision Act and the Pension Fund Act apply. The Business Reorganisation Law also does not apply to investment service companies, financial institutions and leasing companies. 
The following assets are excluded from insolvency proceedings and are exempt from claims of creditors: inheritances, legacies and gifts to the extent not accepted by the insolvency administrator; any assets that the insolvency court decides to release from the estate; claims arising in the context of legal proceedings asserted by the debtor and assets in the possession of the debtor the restitution of which is subject to legal proceedings to the extent the insolvency administrator does not enter into such proceedings; all rights that are incapable of being transferred to a person other than the debtor; and, when the debtor is a natural person, a certain amount of monetary funds that is granted to the debtor for his or her living expenses. Investments of the Republic of Austria in partially or entirely nationalised companies are in most cases administered via the Austrian State and Industrial Holding Company (ÖBIB), an Austrian limited liability company that holds the shares in these companies. The ÖBIB is the successor of the former Austrian State Industrial Holding Stock Corporation (ÖIAG). This had been turned into ÖBIB in early 2015 by way of a form-changing transformation pursuant to the Austrian Stock Corporation Act. Other shareholdings in government-owned enterprises (eg, the Federal Railways Company) are directly held by the Republic of Austria and administered by the government. Because all these nationalised companies and government-owned enterprises are set up under Austrian private law (most often in the form of a limited liability company or a stock corporation), there are no specific procedures as to the insolvency of these enterprises. Consequently, the creditors’ remedies are also the same as in ordinary insolvency proceedings. Statutory bodies under public law (eg, municipalities, cities with their own charter, federal states and the Republic of Austria itself) may also become insolvent. 
This is generally accepted and derived from their general legal capacity. Therefore, in principle, in the case of an insolvency of a statutory body with general legal capacity, the Austrian Insolvency Code will apply. On 1 January 2015, the Austrian Federal Act on the Recovery and Resolution of Banks (BaSAG), which implemented Directive 2014/59/EU on the recovery and resolution of credit institutions and investment firms (BRRD), entered into force. The BaSAG only applies to credit institutions, financial institutions that are subject to supervisory consolidation, and financial holding companies that are part of an Austrian credit institution group. Its main principles are the winding down of assets or the recovery of a bank without severe impact on its value, the protection of taxpayers and the equal treatment of creditors of a credit institution that is subject to bail-in measures (‘no creditor worse off than in insolvency’). The BaSAG provides for the same early intervention measures and resolution tools as the BRRD, such as the production of recovery and resolution plans by institutions, additional supervisory powers for the Austrian financial market authority (FMA) as national resolution authority to intervene at an early stage and the entrusting of the FMA with necessary resolution powers and tools such as the sale of business or shares, the setting up of a bridge institution, the separation of assets and the bail-in of shareholders and creditors of a failing institution. Like the BRRD, the BaSAG aims at providing an alternative for credit institutions to standard insolvency proceedings. However, a credit institution can at the same time be subject to both resolution measures under the BaSAG and insolvency proceedings under the Austrian Insolvency Code. Importantly, the BaSAG modifies the usual ranking of creditors in the course of insolvency proceedings because certain claims (ie, of insured deposit holders) are satisfied with priority.
Payments of subordinated claims will only be made if the first ranking creditors have been fully satisfied. Insolvency proceedings are generally conducted by the competent provincial court (in Vienna, the Commercial Court) in the area where the debtor's business is located at the time of filing for insolvency. Failing this, for example when the debtor is a private person, proceedings are conducted by the court of the place where the debtor has its permanent residence, its branch office or any assets. In the case of a natural person applying for insolvency proceedings, the competent district court is involved. Austrian law distinguishes between three types of court orders: those that can be appealed with an autonomous recourse, those that can only be appealed together with another appealable decision and those that cannot be appealed at all. The remedy against court orders is always a ‘recourse’. The general rules according to the Civil Procedures Act apply, including the formal requirements as to a recourse's content (declaration of appeal, reason for appeal and claim). In insolvency matters, the appellant is allowed to bring new facts or evidence during recourse proceedings, provided that they already existed at the time when the appealed decision was made. Recourses do not have a delaying effect on the enforceability of the court order. However, the court cannot alter the appealed decision to the detriment of the appellant. This means that, as a worst-case scenario for the appellant, the recourse gets rejected. If the requirements of a recourse are met, the appellant is entitled to bring an appeal. As a prerequisite to the decision of the appellate court, the trial court where the appeal was submitted decides on the admission of the appeal. After admission, the appeal is submitted to the appellate court, which also has the right to reject the recourse.
Under Austrian law, the term ‘voluntary liquidation’ of a company is used to refer to a company being dissolved by its shareholders voluntarily according to its corporate charter, outside the scope of insolvency proceedings. In such a case, all creditors' debts must be fully satisfied before the liquidation can be completed. The following does not deal with ‘voluntary liquidation’ in the strict Austrian sense of the word but with the situation in which the directors of a company (as opposed to its creditors) can, and under certain circumstances are required to, file for insolvency proceedings. A debtor is required to initiate insolvency proceedings if the insolvency test is met (see question 15). Following the application for opening insolvency proceedings, the court examines the application and decides whether the debtor meets the insolvency test. If this is the case, the court will open insolvency proceedings immediately. Once the court has formally opened insolvency proceedings (with the exception of reorganisation proceedings with self-administration), the right to make any dispositions with respect to the insolvency estate and the administration thereof passes from the debtor to the insolvency administrator appointed by the court. In such case, only the insolvency administrator is entitled to act on behalf of the insolvent's estate. Transactions concluded by the debtor after the opening of insolvency proceedings are void with respect to the creditors. If the court makes an order for reorganisation proceedings with self-administration, the debtor retains the right to make dispositions with respect to the insolvency estate. However, it will be supervised by a court-appointed reorganisation administrator.
If the conditions for the opening of insolvency proceedings are met (see question 15) or there is a real threat of the debtor's inability to pay debts as they fall due (‘pending illiquidity’), the debtor may apply to court for the opening of reorganisation proceedings. A reorganisation proceeding can only bind unsecured creditors (and secured creditors to the extent that their claim is under-secured). The debtor may also apply for the opening of reorganisation proceedings after insolvency proceedings have been opened as long as such proceedings have not been concluded. An application for the opening of reorganisation proceedings must include a reorganisation plan offering payment of at least 20 per cent of the claims to unsecured creditors within two years of the approval of the reorganisation plan. The court will appoint a reorganisation administrator who is in charge of the company until the reorganisation plan is approved. The approval of the reorganisation plan requires a majority of (unsecured) creditors holding more than 50 per cent of the aggregate claims of those (unsecured) creditors present at the relevant court hearing. Alternatively, the debtor can apply for reorganisation proceedings with self-administration. In such a case, the reorganisation plan has to provide an offer for the payment of at least 30 per cent of the (unsecured) creditors' claims within two years after approval. An inventory of assets, a current status report and a liquidity plan for the following 90 days have to be provided at the time of application. The advantage of reorganisation proceedings with self-administration is that the debtor does not lose control over the assets to an insolvency administrator, allowing the debtor to retain control over its business and the proceedings. Only for legal acts that are not considered to be in the ordinary course of business is the reorganisation administrator's approval required.
Note that only an insolvency administrator can take avoidance actions, hence these are not available in a reorganisation. If the reorganisation plan is not approved within 90 days from the beginning of the proceedings, the self-administration will be revoked and an insolvency administrator will be appointed. During the continuation of the proceedings under the supervision of the insolvency administrator, the reorganisation plan itself can still be approved by the creditors. The approval of the reorganisation plan results in the conclusion of the insolvency proceedings and the termination of the insolvency administrator's appointment. Furthermore, the debtor is relieved of the obligations towards its creditors exceeding the quota offered in the reorganisation plan. Creditors can only set off their claims in accordance with the quota of the reorganisation plan, whereas before the approval of the plan it is possible to set off the entire claim (provided the general requirements are met (see question 36)). A debtor who is neither insolvent nor over-indebted may also apply to the court for the opening of reorganisation proceedings under the Business Reorganisation Law. If certain financial ratios are not met, an application for reorganisation is mandatory. The application should include a reorganisation plan, which may be supplied up to 60 days after the filing of the application. The court will appoint a reorganisation auditor to examine and assess the reorganisation plan. As already mentioned, the opening of reorganisation proceedings under the Business Reorganisation Law will not change the situation of creditors as this reorganisation is not an insolvency proceeding. Secured creditors are creditors holding a secured right over the debtor's assets (lien, mortgage, etc).
Preferential claims include the costs of the reorganisation proceedings, various disbursements of operating costs and expenses (eg, claims of employees for normal salary accruing after the opening of the reorganisation procedure) and remuneration for certain creditors’ associations as defined by law. Mandatory features of a reorganisation plan include full satisfaction of all secured and preferential claims, as well as the debtor’s offer to pay to all unsecured creditors at least 20 per cent of the outstanding claims within two years after the approval of the reorganisation plan. In the case of reorganisation proceedings with self-administration, the debtor has to offer the payment of a quota of at least 30 per cent (as well as satisfaction in full of all secured and preferential claims). The reorganisation plan must be approved by unsecured and non-preferential creditors representing more than 50 per cent in value of the total outstanding unsecured, non-preferential debts, as well as by the (simple) majority of the creditors (by headcount) that are present at the reorganisation hearing. Generally, the reorganisation plan must treat all unsecured and non-preferential creditors equally. Deviations from this principle are possible if the reorganisation plan is approved by the majority of the unsecured creditors present at the reorganisation hearing (by headcount) and by creditors representing at least 75 per cent of the outstanding unsecured, non-preferential debt. The Insolvency Code does not provide for the possibility of a reorganisation plan including releases in favour of third parties. The reorganisation auditor must approve the restructuring plan, and the court must confirm it. The creditors have no right of objection. Each (individual) creditor may also apply for the opening of insolvency proceedings with respect to a debtor.
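The double-majority voting arithmetic described above (a simple majority by headcount plus more than 50 per cent by value of the claims present, or at least 75 per cent by value where the plan deviates from equal treatment) can be sketched in a few lines. This is an illustrative model only; the function name `plan_approved` and its inputs are not statutory terms.

```python
def plan_approved(votes, equal_treatment=True):
    """Illustrative double-majority test for a reorganisation plan.

    `votes` is a list of (claim_amount, approves) tuples, one per unsecured,
    non-preferential creditor present at the reorganisation hearing.
    """
    total_value = sum(amount for amount, _ in votes)
    yes_value = sum(amount for amount, approves in votes if approves)
    yes_count = sum(1 for _, approves in votes if approves)

    # Simple majority of the creditors present, by headcount.
    headcount_ok = yes_count > len(votes) / 2

    if equal_treatment:
        # More than 50 per cent by value of the claims present.
        value_ok = yes_value > total_value * 0.5
    else:
        # Qualified majority where the plan treats creditors unequally.
        value_ok = yes_value >= total_value * 0.75

    return headcount_ok and value_ok
```

For example, creditors with claims of 100 and 40 voting in favour and one with a claim of 70 voting against would carry an equal-treatment plan (two of three by headcount, 140 of 210 by value), but would fall short of the 75 per cent value threshold for an unequal-treatment plan.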
The creditor will need to establish that the debtor is insolvent (ie, either illiquid or over-indebted without a going concern prognosis, although, realistically, a creditor will usually only be able to demonstrate the former) and that he or she has a valid claim against the debtor, even if this claim is not yet due for payment. If the court is satisfied that the insolvency test is met, the court will open insolvency proceedings without undue delay after the creditor’s application. The effects of the commencement of the insolvency proceedings (where there are sufficient funds available to bear the costs of the insolvency proceedings) are the same as described in question 6. Only the debtor may file an application for the commencement of reorganisation proceedings. Creditors may only apply for the initiation of insolvency proceedings with respect to a debtor (see question 9). Reorganisations may be ‘pre-packaged’ or structured within certain limits. This may be the case if the offered settlement does not meet the minimum targets (notably, the satisfaction quota) imposed by law (see question 8) and therefore one or several large creditors need to subordinate their claims for the reorganisation to be approved by the court. If the reorganisation plan does not secure the necessary majority and quorum of the creditors’ vote during the reorganisation hearing, it fails. Furthermore, the court can, and in some circumstances has a duty to, reject a reorganisation plan even though it has been approved by the creditors (eg, if material regulations have not been complied with, or if the reorganisation plan favours certain creditors). If the reorganisation plan is not approved, the reorganisation proceedings are continued as insolvency proceedings. The approved reorganisation plan may be actively monitored by a reorganisation administrator if agreed upon in the reorganisation plan.
During such supervision, the court may issue protective measures with regard to the debtor’s assets and may veto certain legal transactions. If a debtor defaults on its payment to a particular creditor, the creditor has to notify the debtor of this and grant it a two-week grace period. If the debtor is still unable to fulfil its obligations after such period, the creditor’s original claim is re-established in its totality (ie, not only in the reorganisation quota). Despite a default with respect to a particular creditor, the reorganisation plan and the quota remain in effect with respect to those creditors on whom the debtor has not defaulted. If a reorganisation plan under the Business Reorganisation Law is not approved by the court, reorganisation proceedings must be closed. General company law provides for standard procedures for dissolving a corporation (called ‘voluntary liquidation’ under Austrian law; see question 6). Such procedures are quite different from insolvency proceedings and do not require any involvement of the court, apart from removing the business from the commercial register. In a voluntary liquidation, all creditors must be fully satisfied. A corporation is dissolved by operation of mandatory Austrian law upon the opening of insolvency proceedings. In place of the corporation, its assets form the insolvent’s estate, which is sold off, and the proceeds are eventually distributed to the creditors. Insolvency cases are concluded by a formal order of the insolvency or reorganisation court after all conditions for the closing of the procedure have been fulfilled. An Austrian debtor is deemed to be insolvent when it is either illiquid or (in the case of corporate entities) over-indebted. According to case law, a debtor is illiquid when it lacks the means to pay all of its liabilities that are currently due. Liabilities due in the future (even if they are already known) are not taken into consideration for this test.
The inability to satisfy liabilities when due constitutes illiquidity only if it is permanent rather than merely temporary (eg, as a result of short-term cash-flow restrictions). According to case law, a debtor is over-indebted if: the assets (based on their liquidation value) would not be sufficient to satisfy all of its creditors; and a business forecast shows that the debtor is likely to become illiquid (ie, unable to pay its debts) in the future and, as a result thereof, will be liquidated. The first limb of the test is objective and will be satisfied if a debtor’s liabilities exceed the value of its realisable assets. It assumes an orderly voluntary liquidation of assets on the valuation date rather than valuing the company as a going concern. The second limb of the test requires an analysis of the probability that the company will become illiquid within a reasonably predictable period (usually at least the current and the following fiscal year). Managing directors of a company must file for insolvency without undue delay, but in any case within the first 60 days of the company becoming illiquid or over-indebted within the meaning of the Insolvency Code (see question 15). During the 60-day period, the managing directors may make reasonable efforts to restructure the company or may prepare an application for reorganisation proceedings. The managing directors will be personally liable for the damage inflicted on the company’s creditors by their failure to make a timely application for the opening of insolvency proceedings. As regards existing creditors, the managing directors will be liable for a reduction in the insolvency quota. As regards new creditors, the managing directors will be liable for the damage suffered by such new creditors having placed confidence in the company being solvent. In addition, managing directors will be liable to the company for any payments made to any counterparties while the company was insolvent.
It is generally accepted that this does not apply where insolvency proceedings are diligently prepared and where the payment is necessary to protect the position of the company’s general creditors. In addition to civil liability, criminal liability may arise from offences such as fraud or disloyalty, or from specific acts such as the fraudulent preference of a creditor or the fraudulent infringement of insolvency law. The managing directors of a company are liable to the company for any failure to perform their function in a diligent manner. Any resulting claims the company has against the directors are subject to a five-year limitation period. The company may not waive or agree to settle these claims to the extent that payment by the managing directors is required to satisfy the company’s creditors. Directors may also be liable directly to creditors if they failed to file for insolvency (see question 17). Also, the Tax Procedure Act and social security legislation impose personal liability on managing directors to the extent that they have failed to diligently manage funds available to the company, where such funds should have been paid on account of taxes or similar charges. Under Austrian social security legislation, a managing director may even be subject to criminal liability for having failed to make pro rata social security contributions on any salary payments that were subject to such contributions. Under the Business Reorganisation Act, managing directors are personally liable for the company’s debt up to €100,000 per individual, if they failed to instigate the opening of business reorganisation proceedings upon having received a report by the company’s auditor stating that the company was in need of reorganisation. This is the case if the company’s equity ratio is less than 8 per cent and the implied debt settlement period exceeds 15 years, unless an opinion is issued by a certified auditor confirming that there is no need for reorganisation.
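The two financial indicators mentioned above can be sketched as a simple check. For illustration only, this assumes equity ratio means equity divided by total assets, and implied debt settlement period means liabilities less liquid funds divided by the annual operating cash surplus; these simplified definitions and the function name are assumptions for the sketch, not quotations from the Business Reorganisation Act.

```python
def reorganisation_need_indicated(equity, total_assets, liabilities,
                                  liquid_funds, operating_cash_surplus):
    """Illustrative check of the two indicators: equity ratio below
    8 per cent AND implied debt settlement period above 15 years.

    The balance-sheet definitions used here are simplified assumptions.
    """
    equity_ratio = equity / total_assets
    settlement_years = (liabilities - liquid_funds) / operating_cash_surplus
    return equity_ratio < 0.08 and settlement_years > 15
```

A company with equity of 5 against total assets of 100 (ratio 5 per cent) and net debt of 70 against an annual cash surplus of 4 (17.5 years) would meet both indicators; improving either ratio above its threshold negates the indication.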
The liability arises if, within two years of the managing directors receiving the auditor’s report, insolvency proceedings are applied for. In certain circumstances, members of the supervisory board or shareholders of a limited liability company may also become liable under the Business Reorganisation Act. In specific circumstances, the managing directors could also be liable under the Austrian Criminal Act for offences such as fraudulent conveyance or intentional preference of a creditor in the state of insolvency. While other employees may also become liable to the company, that liability is limited under the Employee Liability Act. If an insolvency or reorganisation proceeding is likely, the managing directors may continue to carry on the company’s ordinary business in an effort to avert insolvency. These actions may include not only day-to-day business but also transactions necessary to preserve the insolvency estate. Directors must perform these duties with the due diligence of a prudent businessman. If the company does become illiquid or over-indebted, the managing directors must file for insolvency without undue delay, but in any case within the first 60 days. During the 60-day period, the managing directors may make reasonable efforts to restructure the company or may prepare an application for reorganisation proceedings. Only during reorganisation proceedings with self-administration may a debtor (or its managing directors, respectively) carry on business itself. In reorganisation proceedings with self-administration, the debtor is not required to surrender control of its entire assets to an insolvency administrator. Nevertheless, a court-appointed reorganisation administrator has a right of veto over any ordinary transactions of the debtor and must expressly agree to all transactions of the debtor beyond the ordinary course of business. The reorganisation administrator must also expressly agree to certain specific decisions set out in the Insolvency Code.
The sale of assets is also subject to the reorganisation administrator’s approval, to the extent that such a sale does not fall within the scope of the debtor’s ordinary course of business. A sale of assets may also be subject to the approval of the creditors’ committee and the insolvency court. Any actions undertaken by directors or officers during a liquidation or reorganisation proceeding without self-administration are unenforceable. After the opening of such reorganisation proceedings, legal disputes with regard to the insolvent’s assets may no longer be filed against the debtor and pending lawsuits concerning the debtor’s assets will be suspended. Any court order rendered after the opening of insolvency proceedings will be void. Generally, all claims against the debtor must be filed with the insolvency court and examined by the insolvency administrator before litigation proceedings may be continued. Where a creditor had his or her claim rejected in the examination hearing, he or she may initiate proceedings against the debtor. If the court has ordered a stay of the proceedings, the insolvency administrator is entitled to continue the proceedings. In business reorganisation proceedings under the Business Reorganisation Law, pending court proceedings are not affected. Only during reorganisation proceedings with self-administration may a debtor carry on business itself (see question 20). With regard to mutual contracts not yet fulfilled by either party, the debtor may choose either to rescind such contracts or to have them fulfilled by both sides (subject to approval by the reorganisation administrator). To facilitate the continuation of the debtor’s business, termination rights in contracts with the debtor may be limited. 
If termination of a contract with the debtor could put the continuation of the debtor’s business at risk, the counterparty may, for a period of six months after the opening of the insolvency proceedings, terminate a contract concluded with the debtor only for ‘good cause’. ‘Ordinary termination’ without good cause (for instance, at mutually agreed periods or dates) is prohibited. Furthermore, the deterioration of the debtor’s economic situation and a payment default in relation to obligations due prior to the initiation of the insolvency proceedings do not constitute ‘good cause’ for termination. However, the restrictions do not apply if the termination of a contract is essential to avoid severe personal or economic disadvantages for the counterparty. If a termination without good cause is declared during the six-month period, it will become effective automatically only after expiry of that period, unless otherwise agreed. Further, termination rights based solely on the initiation of insolvency proceedings are invalid. Only certain financial and derivative contracts, which are usually entered into under master agreements that provide for the mutual set-off of claims (‘close-out netting’), are exempt from this rule. The creditors must file their claims against the debtor in court. The court may appoint a creditors’ committee to supervise the acts of the insolvency or reorganisation administrator. Apart from that, the creditors meet only once, at the reorganisation hearing where the creditors vote on the reorganisation plan. The main duties of the court are to hold the opening hearing and the reorganisation hearing as well as to issue the necessary decisions. In reorganisation proceedings under the Business Reorganisation Law, the conditions for the debtor to carry on business are as described in question 7. In essence, the court opens business reorganisation proceedings, appoints and supervises the reorganisation auditor and closes reorganisation proceedings.
The creditors do not have any special rights to supervise the debtor’s business activities. Indeed, they are not affected by the reorganisation. However, certain bridge loans (and similar measures) granted in the reorganisation are, under certain circumstances, protected from avoidance if the reorganisation is not successful and insolvency proceedings are opened. Once insolvency proceedings have been formally opened by the court, the administration of the insolvent’s assets is exclusively conferred upon an insolvency administrator. The insolvency administrator is, in principle, entitled to conclude credit agreements on behalf of the estate. In reorganisation proceedings with self-administration, the debtor is not fully deprived of its ability to enter into transactions with respect to its assets (see question 11). The debtor is, however, prohibited from concluding certain transactions, such as selling real estate or granting sureties. Entering into loans (whether secured or unsecured) is not specifically prohibited. Nevertheless, the debtor must obtain the approval of the reorganisation administrator for any transaction beyond the ordinary course of business. Borrowings might therefore need such approval, depending on the ordinary course of the specific business. In reorganisation proceedings under the Business Reorganisation Law, no specific limitations on post-filing credit apply. In insolvency proceedings, the sale (or lease) of specific immovable assets is subject to the prior approval of the court and the creditors’ committee, and must be publicly announced at least 14 days (in urgent cases, eight days) in advance. The same applies to the sale (or lease) of the debtor’s entire business (or the debtor’s controlling share in a business), the debtor’s entire movable assets (whether fixed assets or current assets) and assets that are necessary for the debtor’s operations. 
The insolvency administrator must hear the debtor with respect to these transactions before he or she decides to take any action. Generally, assets are sold by the insolvency administrator in a private, out-of-court sale. A court sale will occur only if determined by the court at the insolvency administrator’s application. Thus, it would be permissible for the insolvency administrator to negotiate an interim sale agreement with one party while continuing to seek better bids. Provisions of Austrian law related to the transfer of liabilities upon the purchase of a business do not apply if the seller of such a business is insolvent. These provisions relate to general liabilities of the seller as well as social security, other pension liabilities and liabilities relating to public charges and taxes. Lease contracts that are filed with the commercial register pass over automatically, but employment contracts do not. Specific assets may be affected by certain encumbrances and will possibly not be transferred clear of such encumbrances. Such encumbrances may, however, lapse upon bona fide acquisition of ownership of the relevant assets. In reorganisations (both with and without self-administration), all transactions (including asset sales) outside the debtor’s ordinary business are subject to the reorganisation administrator’s prior consent. This is also the case for any sale of real estate, the granting of a lien over any asset, the granting of sureties and transactions without due consideration. All other transactions may be vetoed by the reorganisation administrator. As Austrian insolvency law states that, in the case of an assignment, the legal standing of the debtor may be neither improved nor worsened, the same must apply to an assignee of the original secured creditor.
However, if the assignee has acquired the claim after the opening of insolvency proceedings, he or she will be deprived of voting rights unless he or she was obliged to acquire the claim because of an agreement set up before the opening of the insolvency proceedings (this rule applies to insolvency and reorganisation proceedings alike). Concerning the transfer of liabilities with certain assets, the same rules apply as in insolvency proceedings, except that employment contracts are transferred to the purchaser of an entire business. Austrian law prohibits credit bidding in a sale of the insolvent’s assets: a creditor only has a claim for receipt of the insolvency quota in insolvency proceedings (principle of equality between creditors). A court would therefore have no discretion to assess a credit bid. Similarly, the credit bid of an assignee of the original secured creditor would not be permitted either. In reorganisation proceedings, it is permissible for the insolvency administrator to negotiate an interim sale agreement with one party while continuing to seek better bids. Credit bidding in a sale of the insolvent’s assets is also permissible as part of the reorganisation plan, provided that the special majority and quorum requirements are met. As credit bidding would result in the unequal treatment of creditors (the credit bidder is privileged), in addition to the general majority and quorum requirements set out in question 8, such reorganisation plan would have to be approved by the majority of the disadvantaged insolvency creditors who are entitled to vote and are physically present at the voting hearing, with the total claims of the consenting creditors amounting to at least 75 per cent of the claims of the disadvantaged insolvency creditors present at the voting hearing. Apart from that, no further specific assessment concerning the credit bid would be necessary.
The insolvency administrator has the right to terminate any contract that has not been fulfilled at the time of opening of the insolvency proceedings (see question 19). In a reorganisation, the debtor can terminate employment or lease contracts, but only with the consent of the reorganisation administrator. Further, employment contracts may only be terminated in relation to employees who work in parts of the business that will either be closed or reduced in size or, if the continuation of the business was not published in the insolvency register, after four months from the opening of the reorganisation proceedings. The reorganisation administrator may only give his or her consent to a termination if the fulfilment of the relevant contract jeopardises the conclusion or fulfilment of the reorganisation plan or the continuation of the debtor’s business. The employee or tenant can claim damages arising from the termination of the respective contract. Such claims are subject to the reorganisation and will be settled only with the quota set out in the reorganisation plan. When the insolvency administrator decides to adopt a contract, he or she must comply with the obligations thereunder. Obligations arising under such a contract (and with respect to breaches thereof) after the opening of insolvency proceedings give rise to a preferential claim of the counterparty against the debtor or the debtor’s estate. The licensor or the owner of the IP right has, by operation of law, no right to terminate a contract with the debtor simply because insolvency proceedings are opened over the debtor’s assets. In insolvency proceedings, the insolvency administrator has the right to terminate any commercial contract not yet completed in full at the time the insolvency proceedings are opened. If the contract is terminated, the counterparty may claim damages in the insolvency proceedings as an ordinary unsecured creditor.
However, the insolvency administrator, on behalf of the debtor, may elect to adopt the contract, in which case the contract remains in force and the contractual obligations of both parties remain intact and have to be fulfilled in full. The court may set a deadline for the insolvency administrator to declare whether he or she wishes to adopt the contract. Such deadline must not be set earlier than 93 days after the opening of insolvency proceedings. If the debtor is in default on a non-monetary obligation, the period for the insolvency administrator to declare his or her position is not more than five working days after a creditor’s application for such a declaration. Data processing activities during insolvency proceedings are governed by the General Data Protection Regulation (Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016; GDPR) and the Austrian Data Protection Act 2018 (DSG 2018). The DSG 2018 complements the GDPR’s framework using various opening clauses. Under the GDPR, the debtor’s obligation to disclose any necessary information to the insolvency administrator must not infringe the data subject’s right to protection of personal data. Further, the insolvency administrator is required to safeguard the interests of the relevant data subjects (eg, the debtor’s employees and customers). As long as the debtor has lawfully processed the data to be disclosed, the disclosure (and subsequent processing) of non-sensitive data can be justified on the basis of the legitimate interests pursued by the controller or by a third party, except where such interests are overridden by the interests of the data subject.
In general, the disclosure of sensitive data (eg, data relating to a natural person’s race, political opinion, trade union membership, religion, health or sexuality) can only be justified by the data subject’s explicit consent. Transfer of personal data to a purchaser is also subject to the provisions set out in the GDPR. If the purchase of a debtor’s personal data is conducted via an asset deal (ie, a third party acquires some or all of the operating entity’s assets containing personal data), the transfer of such data can only be justified as set out above (ie, the transfer of non-sensitive data may be justified, except where overriding interests of the data subject exist). Additionally, since 25 May 2018, extensive information requirements apply, according to which the debtor must inform the respective data subjects about the data transfer. For transferring sensitive data, the affected data subjects’ individual explicit consent has to be obtained prior to such transfer. If personal data collected by the debtor is purchased via a share deal (ie, the purchaser acquires the shares of the (insolvent) operating entity from the entity’s shareholders), the GDPR does not restrict the transfer of sensitive or non-sensitive data to the purchaser. Generally, arbitration procedures are rarely used in insolvency proceedings. In insolvency proceedings, the insolvency administrator is generally not bound by any arbitration agreements entered into by the debtor, except for the circumstances described in the following paragraphs. The insolvency court has sole and exclusive jurisdiction to hear the subject matter of the insolvency case. Any prior arbitration agreement between the debtor and its creditors with respect to the conduct and subject matter of insolvency proceedings would be void.
As a consequence, an arbitration proceeding would only take place in cases where the administrator agrees to a (renewed) arbitration agreement after the initiation of the insolvency proceedings. Based on the voidance rules set out in the Insolvency Code, if insolvency proceedings are opened, the insolvency administrator has the right to challenge the validity of certain business transactions concluded by the debtor prior to the opening of insolvency proceedings. The debtor cannot validly enter into an arbitration agreement with respect to such proceedings prior to the opening of insolvency proceedings. Additionally, the insolvency administrator would not be bound by such an agreement because the voidance claims arise only after the opening of insolvency proceedings and for the benefit of the insolvent’s estate. The debtor cannot legally dispose of such claims. However, the insolvency administrator may enter into arbitration proceedings on his or her own account with respect to voidance claims, but this possibility is very rarely used. Where the insolvency administrator has adopted a contract (see questions 20 and 22), he or she is bound by the contractual provisions and any arbitration agreement contained therein. Apart from contractual proceedings, an insolvency administrator is typically engaged in court proceedings with respect to some of the creditors’ own property that was commingled with the insolvent’s estate, or with respect to the realisation of security relating to some creditors’ secured claims. Legal scholars hold the view that, in these cases, the insolvency court has no exclusive jurisdiction to hear such proceedings. Consequently, the insolvency administrator remains bound by any arbitration agreement concluded between the debtor and its creditors or third parties and the court will allow any pending arbitration proceedings to continue.
Except for the aforementioned prohibitions, disputes can be arbitrated with the consent of the parties and the insolvency administrator after the insolvency case has been opened. Out-of-court enforcement over the debtor’s assets is possible if these assets have been provided to a creditor as security and out-of-court enforcement has been agreed in the agreement for the provision of such security. As long as no insolvency proceedings have been opened, unsecured creditors may enforce their claims (court judgments, enforceable notarial deeds, etc) according to the provisions of the Austrian Enforcement Code. In these proceedings, an unsecured creditor may, among other things, apply for the compulsory creation of a mortgage over the debtor’s real property. Normally, however, enforcement would be directed against the property, receivables, rights and any other assets of the debtor. Procedures under the Enforcement Code are usually time-consuming, in particular if they involve the forced administration or forced sale of real property. The decision on the opening of insolvency proceedings, as well as other decisions issued by the court, must be published. All notices of decisions of the court are published on www.edikte.justiz.gv.at for a limited period. Among the creditors’ meetings provided for by law is the reporting hearing, at which the insolvency administrator submits a report on the status of the proceedings. Other meetings can be held at the court’s discretion or if requested by the insolvency administrator, the creditors’ committee or at least two creditors representing claims of at least one-quarter of the total claims (secured and unsecured) against the debtor. All meetings are called by the court and published on the internet. In the reporting hearing, the insolvency administrator reports on the prerequisites for the closing of the business or parts of the business or the continuation thereof, as well as on any reorganisation plan and its viability.
The insolvency administrator has to give a statement of accounts at the end of the insolvency proceedings and whenever the court issues instructions to do so. Each member of the creditors’ committee may file an application with the court to have the insolvency administrator removed from office. Additionally, the court may at any time remove the insolvency administrator on its own initiative. Upon final confirmation of the reorganisation plan, the debtor is released from its liabilities in accordance with the reorganisation plan. However, a reorganisation plan may not provide for the release of liabilities owed by third parties. Therefore, while the debtor may also be released from its liabilities towards jointly liable parties (eg, guarantors), all such jointly liable parties will remain liable to the debtor’s creditors. If the debtor is in default of its payment obligations under the reorganisation plan, the original liabilities may be reinstated, provided that the creditor has given due and timely notice of the default. In principle, the liabilities are reinstated proportionally (ie, if 75 per cent of the insolvency quota has already been paid, 25 per cent of the original liability will be reinstated). Thus, provided that the quota pertaining to a certain liability has been paid in its entirety according to the reorganisation plan, such original liability will not be reinstated. In general, the reorganisation plan may not deviate from this provision to the detriment of the debtor. If the whole reorganisation plan is annulled, different rules will apply. The creditors’ committee, consisting of three to seven members, is appointed by the court on its own initiative or upon application by the creditors, if the particular features of the case indicate that a creditors’ committee is necessary. In practice, a creditors’ committee is established in all large-scale insolvency cases.
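The proportional reinstatement rule described above reduces the reinstated amount by the fraction of the quota already paid. A minimal sketch of that arithmetic, with an illustrative function name of our own choosing:

```python
def reinstated_liability(original_claim, quota_fraction_paid):
    """Proportional reinstatement on default under a reorganisation plan.

    If a fraction of the quota has already been paid, the same fraction of
    the original claim stays settled and only the remainder is reinstated
    (eg, 75 per cent of the quota paid leaves 25 per cent of the original
    claim to be reinstated).
    """
    return original_claim * (1 - quota_fraction_paid)
```

So a creditor with an original claim of 1,000 who has received 75 per cent of the agreed quota would, on a duly notified default, see 250 of the original claim reinstated, and nothing at all once the quota has been paid in full.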
The appointment has to be based on proposals by the creditors, representatives of the debtor’s employees and other special interest groups. The creditors’ committee has to supervise and support the appointed insolvency administrator and approve the sale or the lease of the debtor’s business and all of the debtor’s movable or immovable assets. Furthermore, the creditors’ committee has to audit the cash administered by the insolvency administrator. Members of the creditors’ committee may not claim any remuneration beyond the compensation of their expenses, such as travelling expenses and necessary costs of experts. Generally, if the insolvency court determines that the available assets are insufficient even to cover the costs of instituting insolvency proceedings, it will dismiss the application for the opening of insolvency proceedings for lack of funds. If a claim is available to the estate and the court determines that this claim is worth pursuing, but the estate lacks adequate funds to do so, it may oblige the creditor that filed the application for the opening of insolvency proceedings to advance funds to enable the insolvency administrator to pursue the claim. Managing directors of legal entities and shareholders holding more than 50 per cent of such legal entity’s shares can be held liable to pay a proportion of the anticipated costs to cover the insolvency proceedings. Where insolvency proceedings are not initiated because of a lack of funds, neither the debtor nor the creditors would benefit from the effects of insolvency proceedings. During insolvency proceedings, no creditor may initiate proceedings on behalf of the debtor to pursue remedies (such as voidance proceedings) against third parties. Only the insolvency administrator is entitled to do so. After the opening of insolvency proceedings, creditors have to submit a notification of their claims to the court. 
The deadline for filing creditors’ claims is established by the court in its order to open insolvency proceedings. Claims may also be filed after the deadline, but such claims will not disturb preceding distributions to the creditors. Creditors who file late do not have the right to contest other claims that have been filed in time. The insolvency administrator accepts or rejects the notified claim at the examination hearing and any creditor may dispute the validity or priority of the claim. Confirmation of a claim by the insolvency administrator has a binding effect with respect to its amount, but not as to whether such claim is a preferential claim or an unsecured claim. Creditors whose claims are rejected by the insolvency administrator or denied by the other creditors (ie, those with contested claims) may bring an application for the court’s confirmation that their claims are valid. Contingent claims may be notified to the court with their complete (maximum) amounts. In the event of suspensive conditions (ie, where the claim arises only after the condition has been met), the quota relating to such contingent claim will be secured by the court and paid to the creditor only after the relevant condition has in fact been met. In the event of resolutive conditions (ie, where an existing claim is extinguished when the condition has been met), the quota relating to such claim may either be secured by the court or ordinarily paid to the creditor, provided that, in exchange, the creditor provides security to the court so that, if the resolutive condition is met and the claim is thereby extinguished, the creditor can pay back the quota. Unliquidated claims may also be notified to the court. The notification has to provide an estimate by the creditor of the claim’s value as at the opening of the insolvency proceedings.
The estimate may be challenged by the administrator and, as a result, the court decides upon the value of the claim by appointing expert witnesses. Claims acquired at a discount can still be enforced for their full face value. However, a party is not entitled to set off an obligation it has regarding the insolvency estate with a claim it has acquired after the initiation of insolvency proceedings (and under certain circumstances when the third party knew or ought to have known of the insolvency of the common debtor, even before the initiation of insolvency proceedings). Interest accruing from the date of the opening of insolvency proceedings cannot be claimed as an insolvency claim during the proceedings. However, the opening of reorganisation proceedings does not stop interest from accruing, unless the parties agree on a discharge of residual debt during the course of such proceedings. Generally, creditors are entitled to exercise their rights of set-off and netting in pending insolvency proceedings, provided that the claims to be compensated were mutual at the time of the opening of insolvency proceedings. A creditor may not, however, set off a claim that arose before the formal opening of insolvency proceedings if the creditor knew, or should have known, of the debtor’s illiquidity. Importantly, as opposed to the general rules of civil law, claims that have not become due at the time of the opening of insolvency proceedings, as well as claims that are subject to a condition, may be set off in the insolvency. Special netting rules apply under the Financial Collateral Act. In reorganisation proceedings under the Business Reorganisation Law, the situation of the creditors is not affected. Therefore, no special rules on set-off apply. In general, Austrian insolvency law is based on the principle that in insolvency proceedings all creditors rank equally. However, secured creditors enjoy priority to the extent of their security rights.
Preferential creditors also enjoy priority (see question 33). Claims of creditors whose claims arose after the opening of insolvency proceedings rank above other claims. Only if the insolvency administrator challenges the claim of a particular creditor may the court decide that the creditor’s claim is, in fact, different in nature from that alleged by the creditor. The administrator may then assign it to a different class, thereby also changing its priority. However, this happens infrequently and the question in most cases is whether the security of a secured creditor is valid. The privileged costs of the proceedings include the remuneration of certain creditors’ associations that participate in the proceedings. Claims accrued prior to the opening of the proceedings (including taxes, social security contributions, wages and salaries) are not privileged. Secured creditors’ claims are not affected by the insolvency. However, if the enforcement of such rights threatens the continuation of the insolvent’s business, satisfaction of such claims may be postponed for a period of six months after the beginning of the insolvency proceedings. Post-opening claims are not satisfied from valid security rights of a creditor (with the exception of costs having arisen specifically with respect to the disposal of the security). The employee’s ordinary wages accrued prior to the opening of insolvency proceedings are deemed to be insolvency claims. Ordinary wages accrued after the opening of insolvency proceedings are privileged and will be satisfied prior to the insolvency claims of unsecured creditors. The insolvency administrator has a privileged right to terminate employment contracts within a statutory period following the reporting hearing, provided that the court has decided to continue the business. Employees can terminate their employment by early termination for cause, subject to the same statutory periods, with the opening of insolvency proceedings being deemed an important cause.
When the termination of an employee’s contract is based on the administrator’s privileged right, the termination compensation of the employee, according to Austrian employment law (eg, holiday compensation, severance compensation and other damages), is deemed to be an unsecured claim. If the termination of an employee’s contract does not fulfil the preconditions of the aforementioned right of the administrator, the termination compensation will be satisfied prior to the insolvency claims of unsecured creditors. Certain mass dismissals must be notified in advance to the AMS (the Austrian Public Employment Service); the relevant thresholds include, in businesses with normally more than 600 employees, the dismissal of at least 30 employees. Failure to notify the AMS results in the respective terminations being void. A violation of the information obligations towards the works council can be punished with an administrative penalty of up to €2,180 (section 160, ArbVG). If an employee is entitled to receive a pension payment directly from his or her employer (direct pension promise) and the insolvency proceedings are opened during an employee’s pay-out phase (ie, following the employee’s retirement), the retired employee is entitled to a maximum of six monthly pension payments prior to the effective date (ie, the opening of insolvency proceedings). For outstanding pension payments after the effective date, benefits securing entitlements and pension, severance and settlement amounts are capped at a maximum of 24 months, or up to 12 months if the pension promise is not subject to the Austrian Company Pensions Act. Such claims will be covered by a fund established solely for the benefit of employees in the event of the employer’s insolvency under Austrian law (the Insolvency Compensation Fund; IEF). Deficiencies accrued prior to the opening of insolvency proceedings are deemed to be unsecured claims, whereas deficiencies accrued after the opening of insolvency proceedings are privileged. The latter will be satisfied prior to the insolvency claims of unsecured creditors.
If an employee is entitled to a direct pension promise and the insolvency proceedings are opened before an employee’s pay-out phase and the employment relationship is terminated as a result of the insolvency, the employee is entitled to vested benefits and rights. The vesting amount is covered by the IEF up to pension, severance or settlement amounts of 24 months. If an employee is entitled to receive pension payments from a third-party pension fund, such claims of both active and retired employees against third-party pension funds are not affected by the employer’s insolvency. Unpaid employer contributions (for active employees) are deemed to be current wages, so for the period prior to the opening of insolvency proceedings they are insolvency claims, and for the period thereafter, they constitute privileged claims that will be satisfied prior to the insolvency claims of unsecured creditors. Until termination of employment, employer contributions are covered by the IEF. A retired employee’s claim for an additional payment into the occupational defined-benefit pension plan, if any, is qualified as an insolvency claim (as this claim arose before the insolvency proceedings were opened). After the initiation of insolvency proceedings, public regulations, including environmental regulations, continue to be relevant for the affected parties. The debtor’s obligation to take all necessary measures regarding environmental requirements persists. Because the insolvency administrator takes over all duties related to the insolvency estate, the administrator also represents the debtor in dealing with the authorities, including with respect to environmental matters. Where the relevant requirements are not met, the public authority may initiate substitute performance. Costs arising as a result thereof after the initiation of insolvency proceedings are preferential costs and are therefore incurred to the detriment of the general insolvency creditors.
If insolvency proceedings are terminated, the creditors would again have the right to pursue all their claims against the debtor without limitation. This holds true for any type of liability. Any funds received by them during the insolvency proceedings would be taken into account. However, if the debtor is a commercial entity and the insolvency proceedings lead to a liquidation of the debtor, the debtor would be deleted from the commercial register and cease to exist after the termination of the insolvency proceedings (unless assets of the debtor emerge, in which case the debtor would be deemed to continue in existence). Distributions may only be made after the general examination hearing has been held. The final distribution may only take place after all assets have been sold, all decisions have been issued by the courts on contested creditors’ claims, the insolvency administrator’s fees have been determined and the final accounts of the insolvency administrator have been approved by the court. This can only be done on the basis of a draft distribution document and a distribution hearing. In reorganisation proceedings, payments must be made in accordance with the approved reorganisation plan. The two principal types of security available for immovable property are mortgages and the transfer of title in property. In a mortgage, the debtor remains the owner. In a transfer of title in property, the transferee is registered as the owner but merely holds the property as a trustee for the transferor. Both types of security are valid only when registered with the Land Registry. The priority of one of several mortgages on the same piece of immovable property usually depends on the chronological order of the entry into the Land Registry. The principal types of security available for movable property are pledges and transfers of title for the purpose of taking security. The most common is the assignment of receivables as a security device. 
However, for such assignments and pledges to be effective as regards third parties, strict publicity requirements must be complied with. For receivables, for example, the requirement is satisfied by notification of the assignment to the third-party debtor or, alternatively, by appropriate notes in the assignor’s accounts from which it is readily ascertainable when and in whose favour the assignment was made. The priority of a pledge or assignment depends on the time the publicity requirement was met. Only the insolvency administrator is entitled to challenge transactions undertaken by the debtor prior to the opening of insolvency proceedings (covering liquidations as well as reorganisations) during the respective ‘suspect period’. In this respect, the Insolvency Code provides for various cases of voidance on a number of grounds and with different suspect periods. The decision lies with the insolvency court. For example, transactions by which the debtor intentionally disadvantaged certain creditors relative to others are, where the counterparty knew of such intention, subject to a suspect period starting 10 years before the opening of insolvency proceedings. In other cases, suspect periods range between six months and two years. Such cases include: the transfer of assets without due consideration (two years); provision of security or settlement of an obligation not due at such time (one year); and business transactions with the insolvent debtor when the counterparty knew or should have known of the insolvency (six months). The provisions, and relevant settled case law, are complicated and sometimes make it difficult to predict whether a particular transaction may become subject to voidance in a future insolvency. If the voidance motion is successful, the transaction will be declared without effect as regards the other creditors.
In this respect, annulment of transactions as described in question 46 should be taken into consideration, as any provision of security or settlement of an obligation towards the parent or affiliated company not due at such time (within 60 days before the opening of the insolvency proceedings) could be challenged by the insolvency administrator and payments made to these ‘insiders’ clawed back. Under the Austrian rules on equity substitution, loans from shareholders to companies suffering a ‘crisis’ (when applying for insolvency proceedings or ‘reorganisations’ under the Business Reorganisation Law) are classified as substitutions of equity and are therefore treated differently. According to the Insolvency Code, shareholders’ claims in this respect are subordinated and can only be satisfied after satisfaction of all unsecured and preferential claims and only if the insolvency court agrees to accept these claims in the course of the insolvency proceedings. Shareholder loans granted outside of a ‘crisis’ rank pari passu with other senior claims. A shareholder may also be held liable for an infringement of a legal nature, ie, if the shareholder abuses the legal structure of the subsidiary or affiliate in order to minimise liabilities. Moreover, Austrian capital maintenance rules may also give rise to claims of subsidiaries or affiliates against their parents or affiliated corporations if they breach the foregoing rules. Austrian corporate law prohibits the return of equity from a company to its shareholder. A company may not make any payments to shareholders other than the distribution of profit or payments made during the course of a formal reduction of statutory capital. Provisions on the repayment of capital also cover benefits granted by the company to its shareholders where no ‘adequate consideration’ is received in return. Such consideration must, as a minimum standard, be no lower than a comparable consideration that the company would have received from an unrelated third party.
Any agreement between a company and its shareholder or any third party granting an advantage to the shareholder that would not, or not in the same way, have been granted for the benefit of an unrelated third party is void and any profit received has to be returned. In insolvency proceedings, the insolvency administrator can enforce this claim against the parent or affiliated corporation. In the case of an Austrian stock corporation, claims can be enforced directly by the creditors of the subsidiary or affiliate. Austrian case law has clearly stated that with respect to group companies considered one economic entity, the principle of legal separation must be respected regardless of economic considerations. This applies not only for the purpose of general corporate law, but also specifically with respect to insolvency law. Moreover, it was reiterated that in insolvency proceedings there can be only one debtor - the individual company whose assets must be considered individually. Thus, the transfer of assets between several insolvent debtors is prohibited and a court cannot order the distribution of company assets among these, even if they are companies within the same group. Under Austrian insolvency law, insolvency proceedings against a parent and its subsidiary may only be combined for procedural purposes and must be heard by the same judge. The proceedings themselves remain independent of one another and the assets and liabilities are not combined into one pool for distribution purposes. According to article 49 of EU Regulation (EU) 2015/848 on Insolvency Proceedings, any assets remaining in Austria shall be transferred to an administrator outside of Austria only if it is possible to meet all claims in Austria by the liquidation of assets in Austrian secondary proceedings.
If insolvency proceedings of members of a group of companies are opened, Austrian insolvency law provides for the application of the rules on cooperation and communication according to articles 56 to 60 and on coordination pursuant to articles 61 to 77 of EU Regulation (EU) 2015/848. The Insolvency Code includes rules on cross-border insolvency proceedings. These provisions apply insofar as no international treaty or EU Regulation (EU) 2015/848 on Insolvency Proceedings is applicable. Most importantly, assets located outside Austria may become the subject of insolvency proceedings in Austria. Further, Austrian courts will recognise and enforce foreign insolvency proceedings insofar as the standards of the foreign insolvency proceeding are comparable to Austrian insolvency proceedings and provided that the debtor’s centre of main interests is located in the foreign jurisdiction. Generally, according to Austrian conflict-of-laws provisions, the laws of the place where the insolvency proceeding is initiated govern the entire proceedings. Special conflict-of-laws provisions apply in certain situations or matters (eg, real property). These principles also apply to reorganisation proceedings. Directives 2001/17/EC on the reorganisation and winding-up of insurance undertakings (replaced by Directive 2009/138/EC) and 2001/24/EC on the reorganisation and winding-up of credit institutions were implemented in Austria. Austria is also subject to the EU Regulation on Insolvency Proceedings, which replaced existing bilateral insolvency treaties. The UNCITRAL Model Law on Cross-Border Insolvency is under consideration in Austria. There are ongoing working sessions of the ‘special task force for insolvency law’ of the Ministry of Justice. Generally, foreign creditors are treated on an equal footing with Austrian creditors during insolvency proceedings taking place in Austria, and are free to file the same applications and notifications of claims as Austrian creditors.
However, they must appoint a person residing in Austria who is empowered to accept service on behalf of the foreign creditor. According to article 49 of EU Regulation (EU) 2015/848 on Insolvency Proceedings, any assets remaining in Austria shall be transferred to an administrator outside of Austria only if it is possible to meet all claims in Austria by the liquidation of assets in Austrian secondary proceedings. Other than such transfer of surplus assets, Austrian law does not provide a mechanism to transfer assets subject to insolvency proceedings in Austria to an administrator in another country. The definition of COMI emerges from European Union law. There is a general presumption that the COMI of a corporate debtor is at its registered office. See further in the chapter on the European Union. Austrian courts focus on objective criteria and therefore the COMI should be ascertainable by third parties. This presumption can be rebutted whenever there are signs indicating that the main administration is in another country. In the case of a group insolvency, the COMI of each subsidiary has to be determined individually. The Insolvency Code allows for cross-border cooperation in several ways. The Austrian insolvency court and the Austrian administrator have to provide to the foreign administrator any information deemed to be of importance for conducting the foreign insolvency proceedings without undue delay. Furthermore, the foreign administrator shall be granted an opportunity to submit its own proposals relating to the liquidation or the utilisation of assets located in Austria or to submit statements in relation to reorganisation plans. In addition, in the case of recognition of foreign insolvency proceedings, the foreign administrator may also exercise the powers granted to it by local laws in Austria except with regard to coercive actions and decisions over legal or other disputes.
The Austrian Supreme Court has not yet dealt with a case where a lower court has refused to recognise foreign proceedings or to cooperate with foreign courts. According to the Insolvency Code, the effects of foreign insolvency proceedings are recognised if the debtor’s centre of main interests lies within a foreign country and the basic principles of these proceedings are similar to those in Austria, in particular with respect to the treatment of Austrian and foreign creditors (see question 50). Within the European Union, any insolvency proceedings are recognised in other member states as soon as the opening of the proceedings takes effect (see the chapter on the European Union). We are not aware of a case where recognition has been refused. We are not aware of any such protocols or hearings. There is no basis for these in Austrian law as currently in force.
This article documents, shows and analyses the everyday rhythms of Billingsgate, London's wholesale fish market. It takes the form of a short film based on an audio-visual montage of time-lapse photography and sound recordings, and a textual account of the dimensions of market life revealed by this montage. Inspired by Henri Lefebvre's Rhythmanalysis, and the embodied experience of moving through and sensing the market, the film renders the elusive quality of the market and the work that takes place within it to make it happen. The composite of audio-visual recordings immerses viewers in the space and atmosphere of the market and allows us to perceive and analyse rhythms, patterns, flows, interactions, temporalities and interconnections of market work, themes that this article discusses. The film is thereby both a means of showing market life and an analytic tool for making sense of it. This article critically considers the documentation, evocation and analysis of time and space in this way. 1.1 Billingsgate is London's wholesale fish market and the UK's largest inland market. Since 1982, the 'new' Billingsgate market has been located on the Isle of Dogs in East London, next to Canary Wharf, after moving out of the City of London where it operated for several hundred years. It's a tightly defined temporal and spatial frame for the exchange and physical redistribution of goods (Harvey, Quilley and Benyon 2002: 202) – in this case, fish – in 'a flow of dispersion-concentration-dispersion' (ibid., 205). So, how does the market take shape each day (from Tuesday to Saturday at least)? What temporal patterns and routines structure it? And how does the activity and movement of people and fish produce the space-time of the market? 1.2 In a conversation early one morning, a Billingsgate Fish Inspector tells me that the market doesn't close, it only rests. After some time, I begin to recognise what he means.
There are spatial shifts in where activity is taking place - from the unloading bays, to the market floor, to the offices, each location populated by variable combinations of workers and customers at different times - but something is always happening somewhere, often at a different pace. I tend to arrive at the market by four o'clock in the morning when trade officially begins – the earliest time that fish are legally permitted to leave the market - or once it is in full flow at five or six. Inspired by Henri Lefebvre's (2004) call to attend to rhythm, I wander around letting myself get caught up in or 'grasped by' the rhythms, noises, tensions, buzz, chill and thrill of the place, hanging out with fish merchants, inspectors, porters, and traders. By the time I leave when the cafe shuts for the day – half past nine in the week, eleven o'clock on a Saturday – I have the sense of having been there accumulated through time. But when I am there, I notice that I keep looking over my shoulder. It's a bodily expression of the uncertain sense of where exactly the market is happening, of its 'perpetually forming and deforming atmosphere' (Anderson 2009: 79). It's as if the market comes to life behind your back, then slips away before you have had the chance to take it in. Immersion in the market, it seems, is an obstacle to perception of its rhythms (Lefebvre 2004: 28; Borch, Bondo Hansen, and Lange 2015: 1085). 1.3 The idea of making a film based on time-lapse photography came out of this sense of the intangibility of the market space as an attempt to capture the temporal structure of the market. Along with sound, it offers a means to evoke a 'sense of time as motion and transformation' (Crang 2001: 201). The elusive quality of the market chimes with the concerns of urban scholars on the character of the city, and the challenge of grasping the 'excesses of embodied and situated experience' (Merchant 2011: 60).
Many researchers have turned to the work of Lefebvre to interrogate the spatial and temporal properties of the everyday, notably his last and posthumously published work, Rhythmanalysis (Lefebvre 2004). Rhythmanalysis, argues Edensor, is effective for revealing how places are 'seething with emergent properties, but usually stabilised by regular patterns of flow' (2010a: 3). It calls our attention to the 'patterning of a range of temporalities' and the ways in which places might be characterised by the 'ensemble of rhythms' that permeate them (Edensor 2010b: 69). 1.4 This article makes both a methodological and a substantive contribution, highlighting the links between a specific methodology - audio-visual montage as a means of doing rhythmanalysis - and the insights that can be generated – of the relationships between rhythm, atmosphere and mobility, and interconnections and interactions in market space. The construction of the montage is explicitly artificial. If we accept that all methods help to produce the reality they study (Law and Urry 2004), this deliberate undertaking to render rhythm does not diminish the significance of the presence of rhythms in the real-time flow of the everyday. Since such rhythms are however hard to grasp in their immediacy, montage offers an effective 'medium of anthropological [or ethnographic] inquiry' (Grimshaw 2011: 248) and 'a tool of multi-sensory and affective discovery' (Garrett and Hawkins 2015: 147). 1.5 The film both describes rhythm and does 'a distinctive form of analytical work' (Grimshaw 2011: 257). It reveals the rhythms that arise from what happens in the market, including the ways in which they intersect, co-exist, and clash, and how they underlie the experience and atmospheres of being in the market.
Following Grimshaw, we can think of the 'synesthetic, spatial and temporal properties of film' as offering 'suggestive possibilities between the experiential and propositional, between the perceptual and conceptual' (2011: 257-258). This audio-visual form may stand alone as an ethnographic document that contains its own analytical gestures. However, if 'images and written texts not only tell us things differently, they tell us different things' (MacDougall 1998: 257 in Grimshaw 2011: 257), then there is more to say. It is therefore worth elaborating on the themes that can be further analysed through the montage and in dialogue with knowledges produced in other ways. 1.6 In addition to the contribution this article makes to thinking and working with rhythm and the audio-visual, it tells us something substantive about 'market time' (Bestor 2001: 92). In so doing, it also adds to existing knowledge of fish markets per se and offers different insights about markets and work more broadly. Economists have famously explored loyalty, price dispersion and economic interaction in fish markets (e.g. Kirman and Vriend 2001; Graddy and Hall 2011; Cirillo, Tedeschi and Gallegati 2012) which have also attracted interest from management scholars (e.g. Curchod 2010). Analyses of the everyday include Bestor's (2004) well-known ethnography of the world's largest fish market, Tsukiji in Japan, its cultures and social institutions, and smaller studies focussing on gender relations (Hapke and Ayyankeril 2004) and humour (Porcu 2005). Interestingly, whilst Mayhew (2008: 162-163) was already aware of the performative dimension of the fish trade in the nineteenth century, Billingsgate has received surprisingly scant attention given its historical and logistical importance in the UK and its reputation globally (although see Bird 1958). This article offers new understandings of the temporal patterns of market life.
1.7 In the next two sections, I discuss doing rhythmanalysis and critically consider the methodology of this study before presenting the film. I then explore three sets of themes brought to the fore through the film. The first is the atmospheres and temporalities of market life; second, the rhythms, pathways and movement through market space; and third, working bodies and things (including the fish itself) in space and time. I conclude with a discussion of the gains and limitations of using audio-visual montage to make sense of market life. 2.1 In recent years, there has been growing interest in Henri Lefebvre's last and posthumously published work, Rhythmanalysis (1992). The publication of the English translation in 2004 reinvigorated interest in the concept (Elden 2006) which has since been critiqued, elaborated and applied. In Rhythmanalysis Lefebvre – in collaboration with his wife, Catherine Régulier – rethinks the everyday and urban life – key themes of his life's work - through the notion of rhythm in an attempt to grasp space and time together (Elden 2006: 186). However, Lefebvre did not provide scholars with a systematic methodology for rhythmanalysis (Highmore 2002: 177). Instead, it is understood more as an 'orientation' (Highmore 2002: 175) for attending to the social world, 'more an investigative disposition' than a 'method for systematic enquiry' (Hall, Lashua and Coffey 2008: 1028), or 'a suggestive vein of temporal thinking' (Edensor 2013: 190). 2.2 That said, several scholars have explored ways of operationalising rhythmanalysis, in particular within geography (Edensor 2010a; Stratford 2015). The most significant body of work relates to the study of mobility, for instance, walking (Edensor 2010b), dancing (Edensor and Bowdler 2015), ferry travel (Vannini 2012a), coach travel (Edensor and Holloway 2008), cycling (e.g. Spinney 2010), and commuting (Edensor 2011).
In addition, rhythmanalysis has been used to study place, in historical accounts of urban street life (Highmore 2002), everyday routines in contemporary urban spaces (Chen 2013; Hall 2010; Smith and Hall 2013; Sgibnev 2015), and street performance in public space (Simpson 2008). It has also been employed to analyse socio-economic processes, for instance in the study of financial markets (Borch, Bondo Hansen, and Lang 2015), and to explore 'natural' or socio-natural rhythms (e.g. Jones 2011; Evans and Jones 2008). 2.3 For Lefebvre, the body is the central tool to apprehend rhythm: 'to grasp a rhythm, it is necessary to have been grasped by it', he argues (2004: 27, emphasis in original). Yet, 'in order to grasp and analyse rhythms, it is necessary to get outside them, but not completely' (ibid.), to allow the rhythmanalyst to reflect on and disentangle as well as to feel rhythm. Lefebvre continues: 'He [the rhythmanalyst] must simultaneously catch a rhythm and perceive it within the whole, in the same way as non-analysts, people, perceive it' (ibid., 20, emphasis in original). There is some ambivalence here (Crang 2001: 202) but the bodily experience of rhythm remains key to appreciating an 'assemblage of different beats', 'temporality not just tempo' (ibid., 189, 194). 2.4 Lefebvre was clear in his reservations about using tools outside of the body to apprehend rhythm: 'no camera, no image or series of images can show these rhythms' (2004: 36). Yet his insistence that 'the rhythmanalyst calls on all his senses' (ibid., 21) chimes with stances adopted by researchers who seek to tune into the environment using visual and sensory techniques in immersive ethnographies today. Indeed, visual and sensory methods have emerged as part of a broader move across the social sciences to 'engage the senses' (Pink 2005), offering a way of retaining the vitality and dynamism of the social world in our accounts of it (Back 2007; Back and Puwar 2012).
This includes attending to and evoking the material world that may be overlooked or inaccessible through talk and text, and recognising distributed agency and practices across human and non-human entities. Pink (2007) argues that video in particular appeals to multiple senses, not least since the senses themselves are connected so that we might '"read" touch, smell and taste' (Merchant 2011: 66) from audio-visual images. 2.5 Several scholars have employed audio/visual methods to undertake rhythmanalysis. Pryke's (2000, 2002) creative use of audio and photomontage disrupts linear accounts of city space. Evans and Jones' (2008) enchanting aural representations of environmental data are a transdisciplinary application of rhythmanalysis across the human and non-human. Hall, Lashua and Coffey argue that making use of sounds alongside the visual offers 'clues to the rhythms of an urban everyday', through which it is possible 'to open up this polyrhythmic complexity' (2008: 1028). Brown and Spinney's (2010) use of head-cam video produces insights into the ordinary and mundane rhythms of cycling whilst Jungnickel's (2015) use of mobile time-lapse photography of cycling shows the vivid quality of 'failed images' and the sense they convey of 'being there'. Simpson's (2008, 2012) innovative use of time-lapse photography to trace everyday rhythms in his study of street performers is closest to the approach I take here. In addition, I add 'real-time' sound to the speeded-up time-lapse images in an audio-visual montage. I now discuss how I conducted the research and put the film together. 3.1 The fieldwork for this research was conducted in autumn 2012 during which time I made two or three visits a week to Billingsgate, following up on contacts I had initially made in autumn 2009 (Lyon 2010). 
I spent several months hanging and wandering around, eavesdropping, looking, asking questions and making connections in the market, explaining that I was trying to understand the work that goes on there. I chatted informally with traders and other workers, attended what formal public events were held during this period, both on site and at Fishmongers' Hall, explored behind the scenes, and participated in courses offered by Billingsgate Seafood Training School. After getting comfortable in these spaces and conversations, I approached key people for an interview. In all, I carried out twenty-five formal interviews (recorded once the market was closed) or informal but structured conversations (prolonged in situ discussions supported by note-making) with fish merchants, salespeople, inspectors and porters. I also shadowed fish inspectors and porters as they went about their ordinary work. With the permission of the City of London Corporation which owns the market, and of specific traders, I took photographs of workers, interactions, spaces and displays. In this process, I had regular debriefing conversations with several key informants who also facilitated my interactions with others. 3.2 Billingsgate is a socially homogenous space, dominated numerically and culturally by older (over fifty), white, working-class men. Often without high levels of formal qualifications, many of the long-established traders are nevertheless affluent. Newcomers to the market include south Asian and Indian sellers of so-called 'exotic' fish. Salespeople, 'shop boys', porters and cleaners work for a wage, and the porters – now fish-handlers – have lost significant status in recent years. The cultural tone of the market, characterised by banter and playfulness, structured many of my own interactions with traders and salespeople. Having grown up in south east London, I found that the market had a familiar and appealing ring.
I genuinely got on well with the traders and inspectors in particular and developed a considerable fondness and regard for them. In short, I was enchanted by the place. 3.3 Good relations in the market were essential for obtaining formal and informal acceptance for the project and for the film. How these dynamics might have shaped what I saw and how I came to see it (Orrico 2014: 2) is not easy to trace. It might be that traditional gender relations – in which I was always treated respectfully, men refrained from or apologised for swearing in my presence, and I was welcomed as a guest (Gherardi 1996) – offered a research advantage, as my 'feminine' presence did not discomfort. I wonder if my educated status (perhaps read as a proxy for class) meant that I was perceived as unavailable despite my openness about living alone, and protected me from more sexualised exchanges, although I was regularly questioned about my living arrangements and family relations. That said, there was no shortage of displays of 'competitive' and 'paternalistic' masculinity (Kerfoot and Knights 1993) amongst groups of men, and performances directed at the women who work in the cafes or who come to the market as customers. More troubling were the occasional racist remarks I overheard from traders, mostly directed at employees. I sidestepped these situations so as not to appear complicit but, given my own precarious and temporary belonging, did not challenge them as I might in other settings. 3.4 Whilst the market extends across and beyond the site, the market hall is central to the everyday life of Billingsgate, as a space of display, interaction, movement, negotiation and exchange, and it is where trade can be observed. The architecture of Billingsgate offers a particular opportunity for seeing its 'temporal unfolding' (Simpson 2012: 431) from the vantage point of a gallery at either end of the first floor overlooking the market hall.
I repeatedly found myself climbing the stairs and looking down on the market hall from this gallery, taking stock of the mood and patterning of the day's activities. It was here that the concrete possibility of making a film based on time-lapse photography began to take shape. My body and senses were clearly instruments of this research (Turner and Norwood 2013), especially in alerting me to the dispersed feel of the market and sparking this way of apprehending market space. 3.5 I found a collaborator - film-maker Kevin Reynolds, of veryMovingPictures – through a serendipitous encounter at a seminar he was filming and a conversation which revealed a shared way of looking at space. We chose one ordinary night - 11 December 2012 – to make it all happen. Just after midnight, we set up two digital cameras (one as a backup) in the first floor gallery, looking down the length of the market hall above the beginning of the central aisle. This position allowed us to look along as well as down on the market floor, the perspective emphasising a viewpoint into the distance and proximity in the foreground, conveying a sense of immersion. The composition suggests the continuity of the space beyond what can be seen in the foreground and so keeps the viewer located within and not wholly above it. Most of the frame is taken up with the market floor, reaching up to the level of the clock suspended from the ceiling at the centre of the market hall, which also offered us the opportunity to explicitly mark time for the viewer. This is a deliberate choice which heightens the unitary sense of place and the intensity of what goes on there. But it is a particular view. Another location – up high within the market space looking down on it from a bird's eye view, or at floor level – would certainly have conveyed a different sense of space (Simpson 2012: 430).
3.6 We took photographs from just before one o'clock in the morning until midday, from market set-up until the market hall was (almost) still. The interval between photographs was ten seconds, based on Kevin's technical judgement and my aesthetic preference that the interval should not mask or excessively 'smooth' the conditions of making the film. The deep depth of field allowed everything in the frame to be kept in focus. The fire glass against which the lens was pressed (and held in place with tape) is visible, also evidencing the making of the film, and situating the camera and the viewer in place. The sound of the market hall is muted in this gallery location but every hour (at varying times within that hour), I went down to the market floor to do some 'soundwalking' (Hall, Lashua and Coffey 2008). Slipping into the flow of the crowd I made brief recordings on a hand-held digital recorder of whatever sounds were in my path. 3.7 The resultant film - or audio-visual montage - is a combination of a selection of these sounds with the sequence of images speeded up so that one hour is presented in thirty seconds. This is a deliberate calibration, based on judgement and experimentation, that this speed would both distil patterns and flows and immerse the viewer in the experience of the space. Once the speed and thereby duration of the film were settled, we edited the twelve sound bites (one of which was unusable). We started at four o'clock in the morning, synchronising the sound of the bell that signals the beginning of trade with the visual that shows the clock striking four, and worked backwards and forwards from there. With the exception of one instance towards the end of the film for which I had not recorded the key sound of trolleys, we were strict about only using sound from the relevant hour. We experimented with different combinations in which sound and image would fit but also surprise and make us notice afresh.
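The calibration described in these paragraphs can be checked with simple arithmetic. The sketch below uses only figures given in the text (one photograph every ten seconds; one hour of market life shown in thirty seconds of film); the variable names and the derived playback frame rate are my own, for illustration:

```python
# Time-lapse arithmetic for the film, using the figures given in the text.
CAPTURE_INTERVAL_S = 10       # one photograph every ten seconds
FILM_SECONDS_PER_HOUR = 30    # one hour of market life shown in thirty seconds

frames_per_hour = 3600 // CAPTURE_INTERVAL_S            # photographs captured per hour
playback_fps = frames_per_hour / FILM_SECONDS_PER_HOUR  # frames shown per second of film
speedup = 3600 / FILM_SECONDS_PER_HOUR                  # how much faster than real time

print(frames_per_hour, playback_fps, speedup)           # 360 12.0 120.0
```

If each photograph becomes a single frame, the film plays at twelve frames per second, a 120-fold compression of real time; the endnote comparison with twenty-five frames per second in standard film suggests why the images keep their characteristic time-lapse stutter rather than smoothing into ordinary motion.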
In the central section of the film when the activity was at its most intense, we layered sound, a fiction that felt more 'true' than the un-manipulated data. Finally, Kevin 'cleaned' the film, making small adjustments of light and colour to avoid calling undue (in our judgement) attention to alterations in the camera settings as the environmental conditions changed. This was a 'creative-analytic process' (Garrett and Hawkins 2015: 146) in which we sought to evoke Billingsgate with an 'affective force' (ibid., 145) that goes beyond representation. 3.8 Billingsgate attracts significant media coverage and interest from amateur and professional filmmakers and photographers (as well as marine biologists). Workers are therefore used to requests for their labour to be observed and documented. Some relish these occasions through polished performances, others keep a low profile. Given this public face of the market, the people I spoke to were well positioned to give informed consent for my questioning. A representative of the City of London Corporation gave me formal permission to take photographs at the market and individual traders gave their verbal consent. In addition, I obtained specific permission from the Corporation to make the film and to make it available in the public domain (via YouTube). I spoke individually to any identifiable traders and secured their verbal consent in advance. On the night we made the film, numerous traders came to ask what was happening and I reminded them of our endeavour to capture the daily sequences and rhythms of market life. Once a draft of the film was complete, I made it accessible for previewing on a private YouTube channel and asked those traders and other identifiable actors to inform me of any objections to making the film public. No one came forward. Since making the film public on YouTube, it has (at the time of writing) been viewed over fourteen thousand times.
At the market, reaction to the film was marked by pride in what workers saw as a positive representation of their working lives and community. 4.1 On the night we photograph the market, we try to get to Billingsgate ahead of anyone starting work on the market floor. Despite arriving at midnight, we're not the first there. But we do get a feel of the place when it's still and quiet, which the film conveys, especially in contrast to what follows. It's chilly – minus three degrees centigrade outside, and at least as cold inside – and the first sounds of preparation for the day can be heard clearly, echoing in and marking out the space around. 4.2 Day breaks inside the market hall from the left of the screen as the bright lights get switched on for traders to set up ahead of daylight seeping in from the outside when the sun comes up. The colours become more vivid, first dominated by the yellow of the overhead structure, the green of the floor, then the white of the traders' clean coats. By three o'clock in the morning, the lids are lifted on sample boxes of fish which are carefully placed on display. The glistening combination of skin, scales and ice adds to the intensity of the morning brightness and spectrum of colour. It's a new beginning and there's a sense of anticipation in the air. 4.3 Telephones start to ring and the sound of the market hall imposes itself. Once it is set up for trade, the soundscape is intense, sometimes jarring or confusing as multiple rhythms compete for attention. It's possible to discern 'layers' (Makagon and Neumann 2008) as we hear the close ring of a telephone, someone shouting nearby, or the pervasive squeak of the polystyrene boxes being moved around. Walking around the market – the way we made the sound recordings for the film – accentuates this.
4.5 We can see from shifts in activity and interaction how these multiple atmospheres are 'produced' (Bohme 1993: 116) through specific forms of labour and action in the market – the arrangement of the display, movements through space, the chatter and clamour of the sale. An atmosphere 'proceeds from and is created by things, persons, or their constellations' (Bohme 1993: 122), and the activities that intertwine them. These activities create and sustain the presence of atmospheres which in turn do things - they set the tone of exchange and as such they matter for how trade happens. We absorb atmospheres and feel them as a 'bodily state of being' (ibid., 118) which fits very well with Lefebvre's call to be grasped by the rhythms of a place in order to perceive it. Whenever I arrived in Billingsgate, I felt myself absorb atmospheres as 'spatial bearers of moods' (ibid., 119), transforming the experience of temporality – was I really this lively at four o'clock in the morning? 4.6 Whilst I asked questions about the length and organisation of the working day, the film is a more powerful statement of the 'rhythmic production of space' (Pryke 2000) and the pace and excitement of market life (Dixon and Straughan 2010: 455) than quotations from interviews. In it we can see 'different temporal itineraries that constitute social space' (Sharma 2014: 5). The presence of the clock at the top of the images emphasises variation in the intensity of work, the duration of different tasks and activities, and the tiring repetition across whole stretches of time, enhanced by the mesmeric quality of time-lapse. It also hints at 'the complexity of lived time' (ibid., 6) and the 'micropolitics of temporal coordination' (ibid., 7) with the world outside Billingsgate such as two-shift sleeping, calculated convergence with others' lives, patterns in which some bodies recalibrate to the time of others (ibid., 20).
4.7 The 'sense of time' (Vannini 2012a: 243) of Billingsgate in a broader set of urban and natural rhythms is also evident once the market is still for the day. It's a contained world when trade is at its peak, but once the inside lights are switched off and daylight is seen reflected in the wet floor, the viewer recognises the rhythms of the market as being at odds with the city space around it – an instance of arrhythmia. What we cannot observe however is the anticipation of rhythms beyond the present that make the market happen the next day, and the one after – orders placed and deals done in a process that extends well beyond the temporal frame of the film. 4.8 The sensory and affective experience of being in the market is suggested by the film, through the play of light, sound and movement. In particular, the surreal juxtaposition of the speeded up images with real-time sound connects viewers to the felt-experiential aspects of moving through space-time. If we take seriously that the senses themselves are interconnected – some viewers comment that they feel chilly watching it - we might go further to claim that the film, albeit grounded in sound and vision, also evokes the olfactory. 'Odours seem not to obey rhythms', remarks Lefebvre (2004: 41), but continually move in and around bodies and space, something borne out by recent research on smell (e.g. Riach and Warren 2015). There's something in the air at Billingsgate – of the sea, the ice, moving bodies, and the still fish - and perhaps in the imagination of the viewer too. 5.1 The architecture of the market hall is very clear from the vantage point of the film, especially the central aisle which produces the encounters within it as customers 'walk' the paths made for them (Harvey, Quilley and Benyon 2002: 206 on Covent Garden), the 'verges' lined with fish.
There are around fifty traders at shops or stands (some beyond the frame) around the market hall and back to back in four 'corridors' lengthways, with several cross-cutting paths at intervals along them. The green of the floor indicates public space, the space belonging to the stands demarcated in dark red. These boundaries are fluid however as buyers step behind the displays to make a more private deal, and boxes and trolleys encumber the main paths. 5.2 The relative immobility of the sales staff and merchants contrasts with buyers on the move. Groups form and scatter, like an uneven pulse, stimulated by sounds or gestures or the lure of the fish in ways that the film does not allow us to appreciate at a micro-level. Indeed, there are multiple co-existing rhythms when trade is in full swing. Whilst there is some jostling at the stands, what is striking is that this destabilising of the boundaries between self and other (Dixon and Straughan 2010: 454) leads to an accommodation of bodies in space and a fluidity of the movement of the crowd (ibid.: 455) – a synchronisation enhanced by the speed of the film. This includes the movement of porters in the market hall facilitated by cries of 'mind your legs', '… your legs!' and the rumble of trolleys (as heard in the film). They manoeuvre their loads skilfully, or cope with how their loads gain momentum – the trolleys have no brakes and cannot easily be stopped – acting as 'pacemakers' (Parkes and Thrift 1979: 360) in the market space. Customers and other workers familiar with the market are tuned in to 'sensory information about the physical and social environment' (Dixon and Straughan 2010: 449) and smoothly and spontaneously move out of the way. The visitors on tours of the market with Billingsgate Seafood Training School stand out for their lack of bodily comprehension of the space – as did I at first.
But I soon absorbed the rhythms of market life, an instance of what Lefebvre calls dressage, to 'bend oneself (to be bent) to its ways' (2004: 39). 5.3 The rhythms, movement and intensity of the market change through the night. The fish merchants and salespeople move around fast but steadily when the space is clear, until they are slowed by the presence of the fish which leads to more uneven rhythms - darting here and there, squeezing past obstacles, negotiating with things on the move. The ring of the bell at four o'clock in the morning signals the legal start of trade when there is a subtle shift in speed and activity on the part of traders and buyers familiar with this repeated daily marker of official time. Movements gather pace and the pace gathers. We feel the polyrhythmic character of the market, including those stretches marked by a repetitive monotony. There is a linear character to the market, from preparation through to the sale and closing and cleaning up when the market is over, portrayed in the film by the steady and swift movement of the hands of the clock at the top of the frame. But there is more. Alongside these lines through time, recursive loops, 'repetition, rupture and resumption' (Lefebvre 2004: 78) in an 'always emergent interaction' of linear and cyclical (Simpson 2008: 823) interfere with one another. At different intervals, the fish inspectors punctuate market time. When the inspectors watched the film, their commentary revealed their 'skilled vision' (Grasseni 2004) and intimate knowledge of the space and its routines, as they could see themselves at work through the blur of images. 5.4 There is a looseness about the end point of the market as different stall-holders close up at different speeds and schedules. We see an urgency in the movement of some traders to get off the floor – for instance those in the foreground of the film on the left hand side of the main aisle who pride themselves on a swift set-up and finish.
Traders on the right of the main central aisle in the foreground seem to be making the most of the energetic efforts of the young man cleaning thoroughly (who was perhaps making the most of the chance to perform for the camera!). In contrast, others appear to relish this stretch as slow time to sort through stock – as we see at the back of the central aisle. 6.1 Amongst other things, the market is an economic space. At Billingsgate trade takes place 'pair-wise' in a one-to-one negotiation – as opposed to an auction, for instance – this institutional form producing particular work activities, interaction and sense of place. Deals are made through swift gestures and quick judgements. Speed is key in finding the right fish at the right price. The film cannot capture the nuances of these interactions but it conveys the feel of people passing through as if they have somewhere else to be. 6.2 Lefebvre's arguments that space is enacted through physical gestures and movements (Merrifield 2000: 177), and that rhythms inhabit the body can be seen very clearly at Billingsgate. Embodied and sensory, workers are tuned into the environment around them, to one another, and to their working tools and materials (Hockey and Allen-Collinson 2009). There is a 'transpersonal dimension' (Anderson 2009) to how they work together, co-ordinating tasks and movements (Lyon and Back 2012: 5.12) through a non-reflective practical knowledge. And they are tuned into the fish. The stands present 'samples' which require sorting, organising, displaying, and maintaining in a liminal state. The fish is relentlessly iced and checked for temperature. It is 'aestheticized and staged in the sphere of exchange' (Bohme 2003: 72). There is more checking, sorting and counting at the end of the time on the floor. Indeed, there are moments in the everyday rhythms of the market when human-fish assemblages become the dominating presence.
Years of accrued skill and tacit knowledge practised in embodied routines make for smooth and continuous assemblages of fish and merchant, intertwined and caught up in the momentum of what they are doing. The body and the fish are 'relationally coupled with space and time' (Abrahamsson and Simpson 2011: 332). The speed and blur of the film both limit and emphasise our perceptions here. 6.3 The sound track of the film also alerts the viewer to the materiality of the market. The squeak of the polystyrene boxes is omnipresent. The boxes resist being lined up closely together, protesting loudly at times, and neither are they quietly acquiescent when picked up by porters wearing gloves or with wet hands. Their clamour makes us pay attention to the materiality of work, to the nature of the objects that occupy the working space alongside people and fish. 6.4 We can see the interconnections of jobs and bodies in time and space through the presence – at different times and in different combinations - of porters, fish merchants, sales people, buyers, inspectors, and finally cleaners. In the top left corner of the screen, a light goes on and off from around half past eight in the morning, indicating someone at work in one of the fish merchants' offices on the first floor. Coincidentally, he is the father of the two men in charge of the stand in the foreground on the left of the central corridor. His being there, apparently working at a desk, is a reminder of the paper and electronic counterpart to the sale on the market floor, and co-existing rhythms of work that are removed from the fish itself. 7.1 Billingsgate is a rich and sensuous world that demands multiple forms of attentiveness from those who inhabit it. It's an example of a space that is difficult to grasp – perhaps the sort of place that we can only be grasped by over time – and that makes it something of a problem to study.
Rhythmanalysis 'provides a practical vocabulary' (Simpson 2008: 823) that allows us to interrogate the patterns of the everyday activities of the market which create time-space (Crang 2001: 187). This article shows how working with rhythmanalysis through the medium of film can generate new understandings of the 'rhythmic ordering' (Simpson 2012: 424) of work. What we learn from this approach is the specific way in which 'time makes space into place' (Parkes and Thrift 1979: 353) in market life. Rhythm can be seen to operate at different scales and to different beats. The institutional rhythms of clock-time may structure the working night and day but the market is not characterised by this linearity alone. Within market time, we see and hear the cyclical repetitions of the sale and the movement of fish. We notice the differentiated rhythms of buyers and sellers and the temporalities of different types of work activity in harmony or at odds with one another. 7.2 The process of manipulating time in time-lapse photography through speeding up static and sequential images and rendering them as film is a mode of data reduction or compression. However, what is made thinner offers, I would argue following Taylor (1996: 86), a thicker description of the market space, especially once combined with sound. It offers 'mentally prolonged space[s]' in which attention is repeatedly renewed (Lefebvre 2004: 33), 'a sort of meditation on time' (ibid., 30). By losing the richness of the detail, we sidestep the sensory overload that live presence and video entail, and begin to distinguish some threads amongst this 'temporal, material, technological and cultural tangle' (Sharma 2014: 4). As from Lefebvre's window, 'the flows separate out, rhythms respond to one another' (2004: 28).
The speed of the film cannot capture the depth and richness of sensuous experience as still or 'real-time' moving images might (Merchant 2011; Lyon and Back 2012), but instead, it affords time as an 'experience of flow' (Crang 2001: 206). In particular, time-lapse delivers the 'spatio-temporal unfolding of everyday events' over a longer duration and reveals 'how various rhythms and routines interrelate and interfere' (Simpson 2012: 440). 7.3 At the same time, audio-visual montage, as presented in this article, loses some felt aspects of embodied and affective experience (Simpson 2011) both on the part of the researcher and what can be sensed of the participants. For instance, there is no room in this form to linger on the detail of embodied skills and knowledge. Furthermore, the reliance on what can be observed runs the risk of neglecting all that lies beyond the empirical. Social dynamics which underpin trade and relations in Billingsgate have enormous reach in time and space but are not easily made visible. The tight and consistent framing of the film is suggestive of a wider world, however, for instance when people enter and exit the screen, and this at least may unsettle any easy sense that the viewer is getting the full picture. Similarly, the surreal effect of the juxtaposition of speeded up time-lapse photography with real-time sound conveys that this re-presentation of market life cannot be read literally. 7.4 Whilst there is a long tradition of using film in ethnography, and considerable enthusiasm in recent years about visual and sensory methods (e.g. Pink 2005), including video-making (e.g. Bates 2015), audio-visual montage offers new opportunities for interrogating and rendering social fields (Vannini 2012b).
It allows researchers to be both 'inventive' (Lury and Wakeford 2012), in this case for the insights that can emerge about market life, and 'live' (Back and Puwar 2012) in an 'appeal to the senses' (Grimshaw 2011: 259, footnote 9), as a means of retaining the vitality, textures and rhythms of the social. I am grateful for the support of a small grant from the British Academy for the project 'Working with Fish from Sea to Table' (2010-2012) (reference no: SG100889); for the kind permission of the City of London Corporation to make the film; and for the collaboration of the fish merchants, inspectors and workers of Billingsgate Fish Market. Thank you to colleagues for helpful suggestions, especially Giulia Carabelli and Lynne Pettinger. 3 There appear to be more women in the market on a Saturday when the profile of customers is domestic as well as commercial. 4 This compares with twenty-five frames per second in film. 5 On the first floor there are also the offices of the Clerk and Superintendent, the Fish Merchants Association, inspectors, maintenance, police and first aid, and the Seafood Training School. ANDERSON, B (2009) 'Affective atmospheres', Emotion, Space and Society, Vol. 2, p. 77-81. BACK, L (2007) The Art of Listening, Oxford: Berg. BATES, C (ed) (2015) Video Methods, Social Science Research in Motion. Abingdon: Routledge. BESTOR, TC (2001) 'Supply-side Sushi: Commodity, market and the global city', American Anthropologist, Vol. 103, No. 1, p. 76-95. BIRD, J (1958) 'Billingsgate: A Central Metropolitan Market', The Geographical Journal, Vol. 124, No. 4, p. 464-475. BOHME, G (2003) 'Contribution to the critique of the aesthetic economy', Thesis Eleven, No. 73, p. 71-82. BORCH, C, K Bondo Hansen and A-C Lange (2015) 'Markets, bodies, and rhythms: A rhythmanalysis of financial markets from open-outcry trading to high-frequency trading', Environment and Planning D, Vol. 33, No. 6, p. 1080-1097.
BROWN, K and J Spinney (2010) 'Catching a glimpse: The value of video in evoking, understanding, and representing the practice of cycling'. In Fincham, B, McGuinness, M, and Murray, L (eds) Mobile Methodologies. Basingstoke, Palgrave Macmillan. CHEN, Y (2013) ''Walking with': A Rhythmanalysis of London's East End', Culture Unbound, Vol. 3, p.531-549. EDENSOR, T and J Holloway (2008) 'Rhythmanalysing the Coach Tour: The Ring of Kerry, Ireland', Transactions of the Institute of British Geographers, Vol. 33, p. 483-501. EDENSOR, T (2010b) 'Walking in rhythms: place, regulation, style and the flow of experience' Visual Studies, Vol. 25. No 1, p. 69-79. EDENSOR, T and C Bowdler (2015) 'Site-specific dance: revealing and contesting the ludic qualities, everyday rhythms, and embodied habits of place', Environment and Planning A, Vol. 47, p.709-726. EVANS, J and P Jones (2008) 'Towards Lefebvrian Socio-Nature? A film about rhythm, nature and science' Geography Compass, Vol. 2, No. 3, p. 659-670. GRASSENI, C (2004) 'Skilled vision. An apprenticeship in breeding aesthetics', Social Anthropology 12(1): p. 41-55. HARVEY, M, S Quilley and H Benyon (2002) Exploring the Tomato, Transformations of Nature, Society and Economy. Cheltenham: Edward Elgar. JONES, O (2011) 'Lunar-solar rhythmpatterns: Towards the material cultures of tides', Environment and Planning A, Vol. 43, No. 10, p. 2285-2303. JUNGNICKEL, K (2015) 'Jumps, stutters, blurs and other failed images: Using time-lapse video in cycling research'. In C Bates (ed) Video Methods, Social Science Research in Motion. Abingdon: Routledge. LAW, J and J Urry (2004) 'Enacting the social' Economy and Society, Vol. 33, no. 3, p. 39-410. LURY, C and N Wakeford (2012) Inventive Methods: The happening of the social. Abingdon: Routledge. LYON, D and L Back (2012) 'Fish and fishmongers in a global city: socio-economy, craft, and social relations on a London market', Sociological Research Online 1Vol. 17, No. 
2 http://www.socresonline.org.uk/17/2/23.html. MAKAGON, D and M Neumann (2008) Recording Culture: Audio Documentary and the Ethnographic Experience. London:?Sage. MAYHEW, H (2008) London Labour and the London Poor. Hertfordhsire: Wordswoth Editions. MERCHANT, S (2011) 'The Body and the Senses: Visual Methods, Videography and the Submarine Sensorium', Body & Society, Vol. 17, No.1, p. 53-71. MERRIFIELD, A (2000) 'Henri Lefebvre: a socialist in space' in M Crang and N Thrift (eds) Thinking Space. London and New York: Routledge. PINK, S (2005) The future of visual anthropology: engaging the senses. London: Routledge. SGIBNEV, W (2015) 'Rhythms of being together: public space in Urban Tajikistan through the lens of rhythmanalysis', International Journal of Sociology and Social Policy, Vol. 35, No. 7/8, p. 533-549. SHARMA, S (2014) In the Meantime, Temporality and Cultural Politics. Durham and London: Duke University Press. SIMPSON, P (2008) 'Chronic everyday life: rhythmanalysing street performance', Social & Cultural Geography, Vol. 9, No. 7, p. 807-829. TAYLOR, L (1996) 'Iconophobia', Transition, No.69, p. 64-88. VANNINI, P (2012a) 'In time, out of time, Rhythmanalyzing ferry mobilities', Time & Society, Vol. 21, No.2, p. 241-269. VANNINI, P (2012b) 'Public ethnography and multimodality: research from the book to the web'. In P Vannini (ed) Popularizing Research. New York: Peter Lang.
http://www.socresonline.org.uk/21/3/12.html
We were over the moon when we viewed the site and couldn't believe how much attention to detail had been paid to even the smallest things. We approached AWD after having a horrendous time with our previous web designer and we are so glad we found them! Richard and his team listened to what we wanted, gave us their professional opinions on everything, explained things well and they even created us a new company logo. We were kept in regular contact throughout the design process and the site was built quicker than we expected. We were really unsure about the content management system but what they set up was very user friendly, even for us technophobes! We didn't leave the office until they made sure we were comfortable and fully understood what to do. There were times we got confused, but they were guaranteed to be at the end of the phone to help and fixed any bugs found straight away. This included setting up a temporary email for us late on a Friday afternoon when ours was not working properly. The work has not stopped since the handover as they have worked on optimising our site on Google to help get our name out there more. We can't thank the team enough and will continue to recommend them to people looking for professional, artistic and friendly web designers. The team are conscientious, have an eye for detail, communicate well and are a pleasure to deal with. Having searched for a website design company in Hertfordshire we came across Advanced Web Designs and were immediately impressed by the range of projects in their portfolio. We therefore contacted them and spoke to Richard to discuss the proposed revamp of our existing site and our desire for it to be a more dynamic and engaging website that would set us apart from our competition.
For various reasons we did not immediately move forward with the new website; however, when we did finally revisit the project, despite having spoken to and received quotations from numerous website design companies, we remembered feeling more comfortable that Richard and the team at Advanced Web Designs had really taken the time to fully understand our business along with our requirements and expectations of the new website. We got back in touch with Richard and within a very short time we had received an initial design concept which not only accurately met our brief but actually exceeded our expectations. The design of a set of characters which were used to graphically represent us and our customers meant we could visually display our services in a very uncluttered and unique way without having to rely on boring images of boilers. During the process the team at Advanced Web Designs provided us with feedback on how best to achieve our requirements based on their knowledge and experience of website design and development. We found this incredibly helpful, particularly for the more technical areas of the website where we were happy to be led by their recommendations. During the development of the website, and since it has gone live, we have requested a number of small alterations and additions, and in every case these have been dealt with quickly and professionally. Overall Advanced Web Designs have provided us with an excellent, stress-free service throughout the process. Our new website is now working really well for us and we would be more than happy to recommend Advanced Web Designs to anyone looking for a new website. Being a brand new business and knowing the importance of having a good website, we carried out a lot of research into different companies and we kept coming back to AWD.
We reviewed their completed websites and, following some good recommendations, we decided to give Rich a call and go through what we needed. Immediately we were comforted by the professional and knowledgeable experience we were looking for, with some great ideas thrown at us already, giving us a good understanding of the level of service and enthusiasm we would receive. With different departments within the company specialising in each stage of the build, we began our conversations with Oli and phase 1 was underway in no time at all. Within a very short time we were receiving conceptual designs for our approval, and with close communication and immediate response times the homepage was complete and ready to hand over to phase 2, where the back end was to be created. Phase 2 went as smoothly as it could, with excellent communication from Gareth, and page by page we completed all the requested tabs until the website was starting to look really good. Throughout the entire build, the extensive experience the team hold was invaluable, offering ideas and helping to lay out the website to ensure it looked as modern and professional as possible. Once the website was completed we were invited to the offices for a one-to-one handover and run-through of the CMS element. Again, the service we received was second to none and a really friendly environment made us feel very comfortable. With a few more minor tweaks the website was launched and we were able to start promoting to our customers with confidence. AWD would come highly recommended in our books to anyone seeking a website build. On behalf of all the team at Etang Negreloube, we can’t thank you enough and look forward to a promising future working together. Advanced Web Designs are incredibly easy to work with. They are a knowledgeable, creative and very friendly team. I approached Advanced Web Designs when The Production Zone relocated to Hertfordshire.
Our original website had served us well but was starting to look a little out of date and, crucially, was not mobile-friendly. From our very first meeting with Richard and the team it was clear that we were in safe hands and had found the perfect company to design and build our new website. AWD were a pleasure to deal with throughout the process and, since the handover and going live, have always been there to answer any questions about the CMS and give helpful advice on matters such as SEO. The site AWD created is fantastic and has received so much positive feedback from clients - it's clean, modern looking and easy to navigate. They also updated the company logo and did a great job. I highly recommend AWD to anyone thinking of having a new website. AWD have been immensely helpful with my business. They have thoughtfully and quickly answered and taken care of all of my queries and have carefully taken a lot of time to take me through every aspect of the construction and ongoing upkeep of my site. They are friendly and straightforward and I would recommend them to anyone wishing to seek expert guidance in their field. The various automated systems suggested and developed by the team at AWD have also dramatically decreased the amount of time I need to spend communicating with our trialist enquiries, players and their parents. Initially Richard and his team were working with us to help improve the search engine optimisation of our previous website. Whilst our natural listings for key search words had improved dramatically since Advanced Web Designs got involved, it became apparent that the restrictive nature of the WordPress template in which our previous site had been built was limiting our success for lesser-priority keywords such as the new centres we were opening and the football courses we were developing.
So despite some initial reservations on my part, I eventually decided to take on AWD’s services to design and develop a new, fully responsive website, and in hindsight I now wish I had moved forward with the project much sooner. As well as having a far more modern design with a greater appeal to my target market, there is also a full content management system which allows us at Protec to manage all the content on the website, as well as expand the site as our business expands. Since the site has gone live I have made numerous requests for small personalised tweaks to the website in order to make my life even easier, and on every occasion the team at AWD have carried out my requests extremely quickly and efficiently, often providing better solutions and results than I had anticipated. Throughout the process I have found Richard and his team to be highly professional, friendly and approachable. Nothing has been too much trouble and their website design and development skills, coupled with their knowledge of my business and what I needed the website to achieve, have ensured that I have ended up with a website, which is critical to my business, that has exceeded all of my expectations. It goes without saying that I would have no problem at all recommending Advanced Web Designs, and would like to take this opportunity to thank them very much indeed for all the time and effort I know they put into this project. The initial drafts were awesome. We were so happy with what was being produced, we started showing clients the updates before it was even ready and everyone agreed how good it was looking. After spending the last two years trying to build and manage my own home-made website (using Wix) and continuously making changes, I was getting stressed out. I realised that the quality of our website didn't match the quality of our service.
I then realised that I'm a personal trainer, not a web designer, so it was time to focus on what I'm good at and hire professionals to do what they do best. I searched Google and the first name I saw was AWD. We had a look at their portfolio and called them straight away. Richard invited us in for a meeting and within minutes we felt confident in their ability, so when they showed me what they could do I actually laughed. I couldn't believe the quality of their work and how poor it made my home-made website look. This was a humbling experience, but I suppose that’s why I was there, so we started to get excited. We gave Richard a few ideas of what we were looking for and I remember him saying 'that’s exactly what we were thinking', so I knew we were all on the same page and it made the whole process a lot more efficient. Ironically, Richard falls into our target market, so we actually asked him to make the website so appealing that he would consider signing up with us, and with that, everything his team did was exactly what we needed from a website. The idea of the website being content managed was exciting. I know what message I want to get across, so to have control of what pictures and text go where was a great project for me to be involved in. I couldn't believe how simple it was, and what I loved was that regardless of how small my questions or problems were, all I had to do was drop anyone at AWD an email or phone call and they were always there to help. Once the website went live, we were so proud, and it really has catapulted our business in the right direction. The feedback we have received has been great and we now feel so confident that it matches the quality of our company. It's really difficult to stand out in such a saturated market, but now we do, in a big way. The price was very competitive. We are also in an industry where you get what you pay for, and that is exactly what we got using AWD.
I still pop in now and again to see Richard and his team, and I have also recommended AWD to two of our clients and will continue to do so without hesitation. Richard listened to my ideas and my expectations of a new website and provided some excellent ideas on content, user experience and how best the new website would reach my target audience. It was clear he had researched my type of business and how the website would best support it. Leading up to purchasing County Castles in January 2016 I had identified that the current old website was in need of a complete overhaul. Through recommendations I contacted Richard and his team at AWD. In our first working meeting I was immediately confident that I would receive a website that met my needs. Throughout the design and build process there has been excellent communication and that still continues to this day. AWD have responded really positively to my needs and requests and have provided invaluable practical and technical advice. Overall, from a customer’s point of view, AWD have been marvellous, delivered within timescales and great value for money. Most importantly I have a fabulous new modern website that supports my business exactly how I wanted it to and, crucially, is so user friendly for my prospective customer base. I would certainly recommend AWD to anyone looking for a new website for their business. We believe that the website they designed for us has set us apart from all of our competitors. If you are in business and looking for a service and product that will double your online enquiries, look no further than Richard and his team at AWD!
Richard and his team at AWD created my first website some five years ago, and when I decided to build a new site to change with the times and stay up to date with my competition, I knew from the outset what I would be getting. Having relied on their knowledge and experience previously, I knew I would get the same result. The team spent time with me to understand what my target audience and market were, and they have created a website that to this day I am complimented on by prospective clients when I first meet them. The design and concept were everything I expected and more, and their sound knowledge of SEO ensures my site is found by potential clients. I now have a modern, smart website that generates more business than ever before and allows my business to continue to grow. Not only is the design a winner but their technical knowledge is spot on. I wanted a website that would express a friendly yet professional feel and that is exactly what I have. The guys at AWD are very patient and continue to be so even after the launch! Thanks guys, look forward to phase 2! They have also managed to get us to (and keep us at) number one on Google under some very generic key words and terms used within our industry. We have used Advanced Web Designs since 1999 and during this period have forged an excellent relationship with them. During the initial meetings they took the time to understand our business and who our target audience is, in order to design and develop a website that not only reflected what we are about but also appealed to our large customer base. As we have grown over the years AWD have re-developed the site for us on a number of occasions, each time making it larger and more diverse whilst somehow keeping the navigation around the site very simple.
I have always found them easy to contact and they deal with our many regular website updates quickly and efficiently. Their ability to keep up to date with technology has meant that we have always benefited from having a modern website, and this fact, together with their very competitive pricing, has once again secured them the job of a big revamp to our website in 2009. We also found the team reliable and available for questions at all times. From the initial meeting, Advanced Web Designs proved to be an inventive and creative company. Richard and his team showed genuine interest in providing a range of bespoke design options for our website concept. We are very pleased with the finished site, which is both modern and attractive. The team’s design skills and knowledge were impressive. I would be happy to recommend AWD, who were a pleasure to work with. We have used Advanced Web Designs for a number of years to design and develop our core business website for ‘JBD Sports Events Management’. As they have always provided a great service, and knowing that Richard is an angler himself and has attended the previous three PDC Invitational Fishing Championships, employing AWD for this project was for us a ‘no-brainer’. Richard’s understanding of the event was not only helpful during the design and development process of the website, but also helped greatly with the preparation of copy and imagery to populate the website. I was so pleased with the outcome of the site and how it worked on mobile devices that I have now asked AWD to redevelop our Sports Events Management website into a more modern, mobile-friendly website in line with the PDC fishing site they have just created for us. I have found Richard and the team at Advanced Web Designs a pleasure to work with and have absolutely no reservations in recommending them to anyone who needs a professionally built website. I've had some great feedback on the site and I would have no hesitation whatsoever in recommending AWD.
When searching for a web designer, I wanted a company who were local and who I felt would be happy to deal with a start-up business, and able to advise on the best e-commerce solutions available. I had an initial meeting with Richard, who listened carefully and with interest to what I wanted, and by the end of our first meeting he had shown me another site that they had recently designed, which was exactly what I wanted. Both Richard and Gareth were very attentive and explained things well. They were always quick to attend to any changes that were needed or that I subsequently asked for once the site was completed. The website they designed for us exceeded all our expectations and we’ve had nothing but positive feedback from our new and existing customers. I’ve known Richard since the mid 80’s when we were both fishing at the North Lagoon over in Broxbourne; however, we lost contact when I moved over to France to manage Dreamlakes. I knew Richard went on to run The Carp Society, but I didn’t really know what he was doing now. When we made the decision that we needed a new website, a number of different contacts recommended giving Advanced Web Designs a call because ‘the guy who runs it is also a carp angler’. We felt it was a huge advantage to have the website rebuilt by someone who not only came recommended, but who is an angler themselves, as this meant they would understand the target market as well as the jargon used in fishing. When we found out that Advanced Web Designs was Richard’s company the decision to use them was made even easier, as I’d always known Richard to be a hardworking and conscientious person. I have no problem recommending Richard and his company to anyone wanting a new website, I only wish we’d found them sooner! I was nervous about using the content management system to populate the website but it is so easy that even I had mastered it within about half an hour.
I was recommended Advanced Web Designs by a friend who uses them for his website. I was a little anxious at first as I don’t really understand technology that much, but everyone I dealt with at every stage of the website build was very helpful and took the time to explain things in a way I understood. They put together a brief for me with their recommendations, which showed they really understood my business and why I wanted a website, and the design they came up with further proved their understanding of the market I want to reach. I can confidently recommend Advanced Web Designs to anyone who is thinking about having a website built. Advanced Web Designs offer an innovative and creative professional service. Time and genuine interest were extended to Fore Street Employment Agency as our website was designed and developed. Their quality of work, technological knowledge and skills demonstrate that AWD can deliver outside the norm. Our website is user friendly, extremely beneficial and has added significant value to our organisation. AWD are easily accessible, responsive and consistent; from our initial meeting to the present they continue to provide us with ongoing efficient solutions to minimise cost and maximise effectiveness. Fore Street Employment Agency is a fast-moving business without the luxury of time; AWD understand our business needs and are always on tap with recommendations and support which enable us to move forward. Their contribution and excellent service have given us the freedom to dedicate our time to our clients. AWD are a pleasure to work with. I was extremely pleased to find a local web design company who I felt would be happy to deal with my start-up business, as well as advise me on the best e-commerce solutions available. I have really enjoyed working with the team at AWD web design, and feel that they have really listened to me.
Both Richard and Kyle have been very attentive, explained things well when I have had a query and been happy to meet with me in person to go over any changes I have asked for. Since going live I have had extremely positive feedback on the site and I would have no hesitation whatsoever in recommending AWD to anyone. Their patience, professionalism and creativity produced a website that is vibrant, eye-catching and simple to use. From the moment I walked into the AWD offices in Hertford I knew that Richard and his team would be able to design the website that had been my ‘pipe-dream’ for years. If you want an amazing website designed by an efficient, friendly and talented team then AWD is the company for you. You will not be disappointed! We were recommended Richard and Advanced Web Designs by a mutual friend. My partner and I had just set up a small company to sell Gel Burners, an ornament that can be used indoors or outside to safely burn bioethanol fuel, which when alight makes for a warming centrepiece and talking point. Not being very web savvy ourselves, it was refreshing that the basic brief we provided was so well interpreted. From the initial design concept and right the way through the project, AWD fully understood the product and provided us with some invaluable advice regarding what was needed to ensure we not only ended up with a functional website, but also one that best promoted our product to our target market. The earthy, natural feel of the design is exactly what we had asked for, but the results were much better than expected, and their Content Management System is exceptionally simple to use. Now the website is live, Richard and his team continue to be proactive and are always on hand to help us with either our requests for the many additional features we keep adding, or suggestions for ways the website can help improve our sales.
Crate Ideas is a company new to the UK which sells Dog Crate Covers; we aimed to be driven by online sales and therefore needed a website. We went through the usual diligence of recommendations, browsing local companies and internet searches, which resulted in a shortlist of three website developers. We put these through their paces with regard to making sure they could provide the ease of functionality and engaging design which were the two most important parameters for us. AWD really understood our goals and the product, and in addition they put forward some really good ideas for the business. We had no option but to go ahead with them. They have a great team and are really easy to work with; nothing is too much trouble; we just needed to ask if something was possible and it was done. Once we gave the go-ahead with their initial design the process was simple and efficient. The end result far exceeded our expectations and we are delighted with our website. I have had some bad experiences with web designers in the past but AWD are friendly and efficient and have proved that good designers do exist! DO NOT HESITATE! My special thanks to Gareth for coming up with solutions to seemingly simple requests which often required complex programming to achieve the effect or facility we wanted; he was always patient, willing to listen and keen to explain. Gareth and Richard have provided us with a good-looking website full of up-to-date visuals and features, giving us all that we asked for and more. I cannot recommend their services too highly. Despite giving them a tight deadline to work to, the brief was met with respect to the timescale, design and technical requirements. On recommendation we approached Advanced Web Designs to quote us for a new business venture we were undertaking.
We liked what we heard in our initial meeting and, as the subsequent quotation fell within our budget, we made the decision to take on their services. Throughout the development AWD have responded quickly to our requests and queries, provided us with professional advice and developed a simple-to-use backend system, enabling us to quickly and easily update key areas of the website ourselves. Richard and his team were a real pleasure to work with throughout every stage of the project. They showed genuine enthusiasm for the project and recommended many features that we hadn't considered but which have made the finished website a highly interactive and informative resource for the club, the team managers and the parents of all the players. As the Club is run on a shoestring and relies almost wholly on volunteers, from individual team coaches and managers right up to the Chairman, Richard and his team designed, developed and are hosting our website free of charge - a very generous and unexpected offer, but ultimately one that demonstrates the ethos and charitable spirit of the company. Despite this we have still been given the service I would expect to receive as a paying customer, and Richard even gave up a Sunday evening to present the new site to all of the managers and carry out a training session on the very easy-to-use Content Management System. Clearly I would highly recommend Advanced Web Designs on a number of levels, including their professionalism, expertise, willingness and ability to understand our requirements, the technical advice provided and the numerous extra miles they went to ensure the site covered every eventuality. I have been most impressed with the care and quality Advanced Web Designs have provided my company.
My initial enquiry and first meeting were handled with the utmost professionalism and the high level of service has continued even after the launch. I am so pleased with the end result I don’t see myself going anywhere else for my future projects. We have used Advanced Web Designs since we first started our holiday accommodation business 6 years ago and feel Richard and his team have made a significant contribution to our success. The designs have been excellent and have used the latest technology, always at an affordable price. Advanced Web Designs are easily accessible and helpful when there is a problem, and difficulties are dealt with quickly and efficiently. We have used Advanced Web Designs for a number of years now for all our web needs. We have found them to have a clear understanding of our business from all aspects, which has been highlighted not only in the design and functionality of our website but also in the advice they offer with regard to web optimisation, possible revenue streams and new services to offer our online customers. Advanced Web Designs have invaluable knowledge about all aspects of online marketing and are able to put their advice into practice. We have found the team at AWD extremely reliable and professional and would recommend them to any business. After speaking to several different website designers in the area, we decided on using Advanced Web Designs as from the start they seemed to understand exactly what our vision was in terms of our website and goals. They were keen to listen to what our requirements were and also made valid suggestions on how we could make the design more efficient and where we could improve. As they are so familiar with website design they also informed us about design concepts and graphic design, especially photo imaging, which is a major part of our website. Richard and Gareth really are perfectionists with great attention to detail, and no matter how small the request or how many times we ask, they are willing to help.
I would fully recommend them to any of my clients and friends. We at the Bait and Feed Company have used Advanced Web Designs for the last 12 years, and during this time they have revolutionised our website and increased our Internet sales massively. Over the entire time we have used AWD we have found them to be professional, proactive, innovative and extremely competitively priced. During this time they have even provided us with some very sound business advice that has proven invaluable to the success of our company. All of the team at AWD are very pleasant, polite and easy to deal with. Their systems are extremely easy to use and their after-sales service and support are second to none. In the most recent e-commerce website they developed for us they used relatively new technologies and methods to overcome our complicated buying and pricing structures. Also, by optimising words on our site they have managed to achieve a number 1 placing for our company on Google and MSN using generic key industry search terms. I simply can’t praise them enough, and not only are we happy to recommend AWD, we would suggest that you don’t waste your time looking elsewhere for a web designer. Advanced Web Designs were recommended to me by my brother-in-law, who has used the company to design and develop his own company’s websites over the last 12 years. Despite admitting to having no prior knowledge of the industry, they took time to familiarise themselves with my business and the bicycle market in which we operate. Following our initial discussions they presented a professional and detailed proposal to us, whilst displaying the confident and friendly personalities I look for in all of my business relationships. I was also pleased that they took the time to explain to me ‘in layman’s terms’ the technical jargon associated with such projects. 
Their initial design concepts proved to me that they fully understood both our brand and our target audience; in fact only a few very minor tweaks were made. During the development of what was initially a standard ‘e-commerce’ website, I asked AWD to quote me for an additional bike builder system. Like the initial quote, the price was exceptionally competitive, and despite the tight deadlines I was demanding, by rearranging their programme of work they also managed to develop the system in time to go live for the London Bike Show where I was exhibiting. The bike builder system in particular threw up a lot of technical scenarios which I hadn’t considered, many of which were actually brought to my attention by the guys at AWD, who impressively also provided me with possible solutions to the problems. Their technical expertise, understanding of what needed to be achieved, attention to detail, competitive price, design skills and, most importantly, their patience are just some of the many reasons I have already recommended them to a business associate and would be happy to recommend their services to anyone looking for a professional web developer. From our first meeting we were confident AWD would provide us with all the advice and guidance we would need to create a brilliant website. Unfortunately we didn't listen to our instincts and decided to look around! Having spent months speaking to others, we came back to the start, frustrated we had wasted time looking elsewhere when what we wanted was there at the first meeting. Richard and Gareth have developed a really great, easy-to-use online system which allows people playing Season Selector to quickly make their predictions by simply dragging and dropping each of the Premiership teams into the player's predicted finishing position. Most importantly for me, the system also works out all of the points (which used to take me ages) and displays them immediately. 
Players can also interact with the website by making comments on the ‘Topic of the Week’, which in previous years has led to some really good banter, particularly amongst the supporters of opposing teams!
Prospects for the global economy have evolved in two quite distinct phases during 2002, broadly corresponding with the two halves of the year. The first half of the year was a period of emerging optimism, with most observers expecting a gradual recovery in the global economy after last year's downturn, and the momentum of growth expected to build steadily in all the major economic regions. These perceptions were encouraged by clear signs of stronger growth in a number of countries, particularly in the United States but also in parts of east Asia and, to a lesser extent, in the euro area. In the second half of the year, however, considerable uncertainty has emerged as to whether the momentum of growth will be sustained. The changing mood about the global economic outlook has been most clearly reflected in financial markets. The major changes have been the fall in share prices in all major countries since early in the second quarter of 2002, the fall in long-term government bond yields to 50-year lows, and the widening in spreads on corporate debt. Broader economic data in a number of the major countries have also taken on a softer tone in recent months, suggesting that the modest global recovery underway since the start of this year has weakened. In the US, which has been the strongest of the major economies this year, the expansion to date has been driven mainly by consumer spending, and there is little sign yet of a pick-up in business investment, which would be an essential element of a more durable recovery. Elsewhere, the picture has been very mixed. Growth in the euro area has turned out to be disappointing, after some reasonably promising signs earlier in the year. In Japan, there are some signs of a pick-up in activity, but the economy remains fragile and heavily dependent on export markets. 
Non-Japan Asia has been the best performing of the major regions, with the Chinese and Korean economies growing strongly, though some of the smaller economies in the region appear to have weakened recently. Overall, the global recovery has remained tentative and has fallen short of the relatively optimistic expectations that were held around the middle of the year. Whether or not the global economy gains greater momentum will depend importantly on the resolution of imbalances still weighing on growth in the major countries. Of particular importance will be the ongoing effects of the cumulative falls in equity prices. These will affect the major economies in a number of ways, including not just through their impact on wealth and confidence but also through their effects on corporate and financial-sector balance sheets. Some of these effects are already being seen, with businesses in the US for example now finding it more difficult or more expensive to raise capital, reflecting perceptions of increased risk. This has been associated with disappointing corporate profits in the US, a result, in part, of an overhang of capacity in some capital-intensive industries. In some respects, financial stresses in the European economies may be more severe than those in the US, given that the falls in equity markets in Europe have generally been larger. Share prices in financial firms in Europe, particularly insurance companies, have shown pronounced falls in recent months. Japan, of course, has its own longstanding imbalances that continue to hamper growth. The overall effect of these forces on the global economy is highly uncertain. In an optimistic scenario, cyclical growth spurred by consumer demand and expansionary policy may be sufficient to wind back the various balance-sheet stresses, but a more pessimistic scenario involving disappointing profits, heightened pressure on balance sheets and weak investment spending in the major economies is also a real possibility. 
It is not surprising that in this environment there has been a marked reassessment of the outlook for official interest rates in all major countries. In the US, expectations of monetary tightening largely evaporated in the third quarter, and were replaced by expectations of easing, which the Fed delivered in November. Similarly, in Europe, expectations of tightening have been replaced by expectations of easing, though official rates have so far remained steady. The mid-sized economies that were raising interest rates in the first half of the year, Australia among them, have all kept rates steady since at least July; markets do not expect any near-term moves, and in some cases expectations of easing are emerging even among these countries. Australian financial markets have not been immune from developments overseas. Share prices in Australia have fallen over the past six months and domestic interest rates, both long term and short term, have adjusted down. Overall, however, domestic financial markets continue to show a good deal more stability than markets overseas, reflecting the steadier path of the domestic economy. In contrast to the tentative nature of the global expansion, the Australian economy has so far continued to grow at a good pace. This performance has been driven by strong growth in domestic demand which, to date, has broadly counterbalanced the dampening effects of the weak external sector. The growth of domestic demand over the past year has been spread across all main components, with consumer spending, housing construction and business investment all contributing strongly. These developments have been associated with above-trend growth in employment, and a declining unemployment rate. 
Prospects in the period ahead will of course depend importantly on global developments, but will also depend on a range of rather disparate domestic factors, most notably conditions in the business sector, the dynamics of the housing market, and the impact of the drought. Business investment has been an important contributor to growth over the past year. In contrast to the major economies abroad, Australia's business sector is in good shape, with a relatively high level of profits and generally sound balance sheets. In addition the direct effects of declining equity prices in Australia are likely to be more muted than those in other countries, given that Australia's share market has remained relatively resilient. In these circumstances, prospects for further growth in investment are still good, particularly given that the level of investment is coming off a quite low base. A number of large resource and infrastructure projects have commenced recently, and data on building approvals and commencements indicate that further strong growth in non-residential building work is in prospect. Of course, recent financial market developments and weaker global economic prospects might yet affect business spending plans, but the most recent business surveys, in the main, suggest a generally positive outlook. The large rise in housing construction has also made an important contribution to the strength of the economy over the past year. As well, the rise in house prices over much of the recent period has added to household wealth and boosted the capacity of households to borrow and spend. Investors have played a large part in the buoyancy of the housing market, accounting for virtually all of the growth in new finance approvals in the sector over the past year, presumably in expectation of strong growth in prices. 
It has been apparent, however, that this process would not be sustainable indefinitely, with emerging oversupply being bound at some point to limit the scope for further price increases. While most measures of housing prices rose strongly in the September quarter, there are some signs that price appreciation in particular sectors of the market is starting to abate. Prices of apartments have lagged behind house prices recently, and there are indications that apartment prices in parts of Melbourne and Sydney showed little, if any, increase in the September quarter. Recent anecdotal reports point to a more general waning of buyer interest, and there has been a noticeable decline in auction clearance rates in the past month or so. With regard to housing construction activity, the latest indicators have remained quite strong, with building approvals, and approvals for housing finance, generally moving higher in the September quarter. Hence, in the short term, housing construction activity is set to continue expanding. But given the emerging oversupply in the sector, housing activity now appears likely to begin declining in the first half of 2003. The rural sector is continuing to experience a severe drought, which will sharply cut rural production and incomes. It is now estimated that the decline in farm production could directly reduce GDP growth by around 1 percentage point over the current financial year. Despite higher prices for some rural commodities, notably wheat and wool, net farm incomes this financial year are expected to be down by more than half from the high levels seen last year. Drawing all these influences together, a modest slowing in domestic demand and output appears likely in the period ahead, principally reflecting the expected maturing of the housing cycle and the impact of the drought. 
While these factors have been evident for some time, they were initially expected to lead mainly to a rebalancing of growth, with the slowing in domestic demand being more or less offset by the impact of gradually improving external conditions. But with global economic prospects less assured, this expectation is unlikely to be met, and hence the economy overall can be expected to slow from its recent strong pace over the coming year. Recent data on inflation have been consistent with the near-term outlook described in previous Statements. The CPI increased by 3.2 per cent over the past year, while measures of underlying inflation, designed to remove the effects of extreme price movements, are currently in a range between 2½ and 3¼ per cent in year-ended terms. The Bank's assessment based on the range of available measures is that underlying inflation is currently around 2¾ per cent. With evidence that wages and upstream price pressures are subdued, and no sign of global inflationary pressures, underlying inflation is likely to remain close to its recent level of around 2¾ per cent during 2003. This represents a slightly lower forecast than was presented in previous Statements, reflecting the noticeably weaker outlook for the global economy and, consequently, the less favourable environment for growth in Australia. CPI inflation could remain a bit higher than the underlying rate in the short term, reflecting the influence of the drought on food prices. But looking further ahead, the prospect is that CPI inflation will converge towards the underlying rate and hence will be within the target range. The risks around this forecast appear evenly balanced. If a reasonably favourable international growth outlook were to eventuate, the domestic economy could continue to expand at close to its recent pace, and in that scenario inflationary pressures may be expected to build gradually. 
On the other hand, an extended period of weaker growth in Australia and abroad might see inflation pressures easing further. In its deliberations on monetary policy over recent months, the Board has taken into account the shifting prospects for the global economy as well as the range of important domestic influences on the economic outlook. These factors have been working in divergent directions, with the drought and the weak international environment subtracting from growth, while the stance of monetary policy and the dynamics of the housing market have been providing a stimulatory influence. The balance of these forces has shifted quite noticeably since mid-year. While initially it appeared that their net effect on the Australian economy in the medium term would most likely be in the direction of generating greater inflationary pressures, this became less clear as events unfolded during the second half of the year, as prospects for the global economy weakened. Some of this shift was already apparent at the time of the previous Statement in August, though subsequent events have suggested a further weakening in global prospects since that time. In view of these developments the Board at its recent meetings has judged that the most prudent course was to retain the current policy setting for the present time, while continuing to assess how the international and domestic economies evolve. Equity markets have remained at centre stage over recent months, with ongoing weakness in the major global indices accompanied by extreme volatility. The recovery from the post-September 11 lows started to peter out early in the June quarter, with broad measures of stock prices in the major economies having fallen sharply since. A rebound in late July and early August proved short-lived, with all of the major equity indices falling to new multi-year lows in early October (Graph 1). 
In the US and Europe the broad-based indices fell to levels last seen in 1997, while in Japan the Topix fell to its lowest level since 1984 (Graph 2). Although there was a modest recovery in equity prices in mid October, the US S&P 500 index remains down by 20 per cent since the start of the year, and the Euro STOXX and the Japanese Topix are down 29 per cent and 14 per cent, respectively. The S&P 500 and the Euro STOXX are now down 40 per cent and 52 per cent respectively from their March 2000 peaks, while the Topix is down 50 per cent since February 2000 and 69 per cent since its peak at the end of 1989 (Table 1). The falls on major equity markets have been accompanied by extreme volatility, with the median daily percentage change in benchmark US indices over the past few months higher than at any time since the 1930s (Graph 3). Within the sharp downward trend over the past half year, there have been two significant rallies. The S&P 500 increased by 21 per cent in a 4½ week period in July and August, and by 16 per cent in a 1½ week period in mid October. While these were significant movements, similar short-lived retracements have occurred in the past after large falls, without necessarily signalling the end to a bear market. While the falls in broad measures of US equity prices over the past two years are significant, they remain well within the historical experience. The most comprehensive way of comparing the current equity weakness to previous bear markets is in terms of the real total return on equities over different periods. This measure includes both the price change and dividends received by shareholders, and then adjusts for changes in consumer prices. There have been 12 periods when this measure showed negative returns greater than 20 per cent in the US since 1870 (the period for which reliable data exist) (Table 2). 
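As an illustration of how the real total return measure just described is computed, the following sketch uses made-up index, dividend and CPI figures (not the actual S&P 500 data), and simplifies by adding dividends at the end of the period rather than reinvesting them:

```python
def real_total_return(start_price, end_price, dividends, start_cpi, end_cpi):
    """Real total return over a period: the change in the index plus
    dividends received, deflated by the change in consumer prices.
    (Simplified: dividends are added at period end, not reinvested.)"""
    nominal_growth = (end_price + dividends) / start_price
    cpi_growth = end_cpi / start_cpi
    return nominal_growth / cpi_growth - 1.0

# Hypothetical bear-market episode: the index falls from 1500 to 950
# and pays 40 in dividends, while the CPI rises from 170 to 180.
loss = real_total_return(1500.0, 950.0, 40.0, 170.0, 180.0)
print(f"{loss:.1%}")  # a real total return of roughly minus 38 per cent
```

Deflating by the CPI is what allows bear markets decades apart to be compared on a like-for-like basis, as in the 12 episodes since 1870 noted above.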
The loss suffered by investors in real total returns in the current episode has been around 40 per cent, slightly larger than the average for earlier bear markets, and the biggest fall since 1974. Since the S&P 500 peaked in March 2000, the current bear market has reversed around 5½ years of gains in real wealth, which is similar to the average for earlier declines. The global equity market weakness since early 2000 has been driven primarily by the recognition that earnings of companies have been too low to justify the high valuations reached. Concerns about the strength of the global economic recovery, particularly the possibility that the recovery may be stalling, are adding to the nervousness about share valuations since, in that event, the expected increases in profits may not eventuate. In the US the outlook has led to significant downward revisions to market earnings expectations over the past few months, with Standard and Poor's ‘top-down’ forecasts for ‘as reported’ earnings growth in the US for 2002 cut from 44 per cent to 24 per cent within a three-month period. Continuing doubts about the accuracy of corporate accounts in the light of recent accounting scandals have added to the uncertainty. There remain divergences between different measures of corporate earnings, with many market participants now focusing on the National Accounts measure of earnings which shows corporate profits to have fallen marginally over the first half of 2002, with earnings now having been essentially flat since the beginning of 1997 (Graph 4). Concerns about the traditional measures of earnings were highlighted by the release in October of Standard and Poor's new ‘core earnings’ measure, which attempts to focus on the ongoing operations of companies. 
Standard and Poor's estimate that two significant costs are not fully accounted for under standard measures of earnings, including those prepared under Generally Accepted Accounting Principles: stock options, and the cost of funding pension liabilities when pension funds have failed to cover these liabilities on an ongoing basis. They estimate that if stock options had been accounted for when issued they would have reduced ‘as reported’ earnings for the aggregate of the S&P 500 companies by around 20 per cent over the year to the June quarter 2002. The cost of topping up defined benefit pension schemes in the face of falling asset values would separately have reduced earnings by 25 per cent. While the price/earnings ratio based on trailing earnings for the S&P 500 has fallen to around 30, down from a peak of 47 earlier in the year, it remains well above the historical average of 15 and even the strong market expectations about earnings for the next 12 months imply a forward-looking ratio that is well above historical norms (Graph 5). Equity markets in Europe, Japan and elsewhere have been subject to many of the same pressures affecting the US market. In fact, European markets have fallen by more than those in the US. One factor that has contributed to the weakness has been concerns about the health of financial institutions, especially insurance companies. While the Euro STOXX had fallen 60 per cent from its peak by early October, the fall in insurance stock prices reached 70 per cent. Although on average bank stock prices have fallen about the same as the Euro STOXX, some of the major European banks had fallen by more than 80 per cent to their most recent trough in early October. In Japan, domestic factors have also contributed to weakness, with the possible impact of a more aggressive policy stance in dealing with problems in the financial sector contributing to falls in banking sector stocks. 
The broader market was also affected by this news on expectations that large numbers of companies could be affected by tightened credit standards in the banking sector. Asian emerging equity markets have generally followed the same pattern as markets in the major industrial economies over recent months, albeit with a smaller fall and more muted recovery (Graph 6). The Indonesian market was hit hard by the news of the Bali bombings, with share prices falling 10 per cent in the first day of trading following the attacks. Latin American markets have also generally moved with the major markets. In the US, the Federal Open Market Committee (FOMC) kept the policy rate unchanged at 1.75 per cent from December 2001 to November 2002, when it cut rates by a further 50 basis points (Graph 7, Table 3). As an indication of how views about the economy and markets have changed in recent months, market participants throughout the first half of 2002 were expecting the Fed to be increasing rates by the end of the year (Graph 8). These expectations started to recede in the third quarter, and were eventually replaced by expectations of easing; these were fulfilled in November. Policy interest rates in the euro area have remained unchanged for the past year. However, as in the US, financial market expectations for future policy changes have moved significantly over recent months. Whereas earlier expectations were that the next move would be an increase, the current expectation is that the ECB will cut its policy rate by 25 basis points, to 3.00 per cent, by December. In the UK the futures market is also signalling a cut of 25 basis points (to 3.75 per cent) in the policy rate by the end of the year, an expectation which was heightened by the release of the minutes of the October Monetary Policy Committee meeting which showed that three of the nine committee members voted for a 25 basis point cut. 
The Bank of Japan (BoJ) increased its reserves target from ¥10–15 trillion to ¥15–20 trillion in late October and increased its monthly purchases of Japanese Government bonds by ¥0.2 trillion per month to ¥1.2 trillion. It also made initial moves to address weakness in the financial sector by announcing a plan to purchase stocks from banks to reduce their exposure to equity market volatility. The tightening cycles in several of the smaller industrial economies that began over the first half of 2002 have paused over the past three months as the outlook for the global recovery has been downgraded. In addition to Australia's moves earlier this year (see chapter on ‘Domestic Financial Markets’), there were also increases in policy rates in Canada (a cumulative 75 basis points to 2.75 per cent), New Zealand (100 basis points to 5.75 per cent), Norway (50 basis points to 7 per cent) and Sweden (50 basis points to 4.25 per cent). Unlike in the larger economies, where expectations have been for monetary easing, market expectations in most of these mid-sized economies are for generally steady official rates in the months ahead, reflecting their relative economic strength. In New Zealand, the new Reserve Bank Governor and the Finance Minister signed a new policy targets agreement, which raises the lower range of the inflation target band to 1 per cent from zero and extends the time horizon for the target to the medium term. Official interest rates in most emerging economies have remained relatively steady over recent months. One exception was Brazil, where the central bank raised interest rates in October by 3 percentage points to 21 per cent, in a move aimed at stemming the slide of the real on the foreign exchange market. Bond markets in the major economies have largely taken their lead from equity markets over recent months. 
Government bond yields fell to very low levels, and spreads on corporate debt continued to widen, as investors sought the relative security of government debt. While these movements have reversed somewhat since mid October as the equity market has recovered, corporate debt spreads remain unusually elevated, reflecting ongoing concerns about financial fragility in the corporate sector. Yields on 10-year US government debt fell by around 90 basis points over August and September, to a 44-year low of 3.6 per cent. They rebounded sharply in mid October as equity markets recovered, but this proved temporary, and yields have since fallen back to 4.1 per cent (Graph 9). Using yields in the market for inflation-indexed securities, it is possible to decompose these movements in nominal yields into the change in inflationary expectations and the change in real yields on bonds. The data suggest that most of the fall in nominal yields in August and September was attributable to lower real yields, although reduced inflation expectations also played a role. This fall in real bond yields is consistent with both a flight to quality and security in the wake of large falls in equity prices over this period, and a lowering of real growth expectations for the US economy. US investors have been shifting money out of equity mutual funds into bond funds in recent months (Graph 10). Yields on European government bonds have moved in a similar fashion to US yields, although movements have been more subdued. Yields on German government bonds fell by around 40 basis points during August and September as equity markets weakened, the outlook for European economic growth deteriorated, and market expectations of lower short-term rates increased (Graph 11). Yields reached a low of 4.3 per cent in late September, before rebounding in October on the turnaround in global equity markets. 
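The decomposition of nominal yields described above rests on the Fisher relation: the gap between the yield on a conventional bond and an inflation-indexed bond of the same maturity gives an implied, or 'breakeven', inflation expectation. The sketch below uses hypothetical yield readings (not the actual US market data) to show how a fall in nominal yields can be attributed mostly to the real component:

```python
def breakeven_inflation(nominal_yield, real_yield):
    """Implied inflation expectation from the exact Fisher relation:
    (1 + nominal) = (1 + real) * (1 + inflation)."""
    return (1.0 + nominal_yield) / (1.0 + real_yield) - 1.0

# Hypothetical 10-year yields before and after a bond-market rally
# (real yields would be read off inflation-indexed securities):
before_real, after_real = 0.028, 0.021
before_pi = breakeven_inflation(0.045, before_real)
after_pi = breakeven_inflation(0.036, after_real)

# In this example most of the 90 basis point fall in the nominal yield
# shows up as a lower real yield, with a smaller fall in the breakeven.
print(f"real yield: -{(before_real - after_real) * 1e4:.0f} bp, "
      f"breakeven: -{(before_pi - after_pi) * 1e4:.0f} bp")
```

The same arithmetic underlies the observation that the August-September fall in US yields reflected lower real yields more than lower inflation expectations.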
Yields on Japanese Government bonds have also fallen since August, declining by 35 basis points to around 1 per cent. Yields initially approached this level in mid September, but rebounded to 1.3 per cent after the Bank of Japan announced its initiative to buy equities from banks, a measure which was perceived as likely to reduce future BoJ bond purchases. Yields have since moved back to their lows as market participants have scaled back their perceptions about the likely size of BoJ stock purchases. The move lower was also assisted by the BoJ announcement that it will increase its monthly purchases of Government bonds, and indications from the Government that changes in the stance on resolving banking sector problems will not be accompanied by any major increase in debt issuance. In the US, corporate spreads to Treasuries have generally remained high over recent months, reflecting continued risk aversion and concerns over US corporate health (Graph 12). These concerns have been most notable for medium-grade credits; the spread between BBB-rated corporate bonds and 10-year Treasuries at one point in October reached 3.6 per cent, the highest level since the early 1980s. There seems to be a fear that companies in this grouping in particular are at risk of downgrades to ‘junk’ status. A prominent example is the very large rise in the spread for Ford Motor Co. debt, from 200 basis points mid year to around 600 basis points now. Emerging market sovereign spreads have narrowed over the past few months (Graph 13). Spreads on Brazilian debt rose to more than 2,000 basis points ahead of the October elections, but have since moderated, although at 1,700 basis points they remain at unsustainable levels (Graph 14). Spreads remain at default levels in Argentina, but have fallen slightly in other Latin American countries. Yields spreads on Asian sovereign bonds have also narrowed modestly since the time of the last Statement. 
In contrast to the extreme volatility seen in equity and bond markets, currency markets have been relatively stable over the past three months (Graphs 15 and 16). In trade-weighted terms the US dollar has fallen by around 1 per cent, with a 2 per cent appreciation against the yen, offset by an equivalent depreciation against the euro. Asian and Latin American currencies that float were generally weaker against the US dollar over the quarter. The Brazilian real weakened significantly through September on electoral uncertainty, and despite a recovery since then remains down 35 per cent for the year to date. Initially, as sentiment about the global economy began to deteriorate around mid year, the Australian dollar fell quite sharply, from around US57 cents to around US54 cents (Graph 17). This reversed about half the rise that had taken place during the phase of bullish sentiment about the global economy in the first half of the year. Since then, however, even though sentiment about the global economy has deteriorated further, the Australian dollar has not followed. In fact, it has appreciated by around 3 per cent against the US dollar over the past three months. The initial sharp reaction to the change in global sentiment may have reflected a view that the global slowing would flow through quickly to the local economy. The fact that many market participants had built up quite long positions in the Australian dollar during the first half of the year, in anticipation of global economic recovery, may have accentuated the move as these positions were adjusted. The relative resilience of the Australian economy to date has no doubt helped support the currency recently. One mechanism through which this is happening is the widening interest spread in the Australian dollar's favour, as interest rates around the world have fallen to exceptionally low levels. This is leading to strong demand for Australian dollar securities in overseas markets. 
One area where this has been particularly noticeable is among Japanese retail investors. For the year to date, issuance of A$ Uridashi bonds amounted to $10.4 billion, with most of the issuance occurring over the past four months. The recent period has been the strongest for Uridashi issuance on record. Also lending support to the currency was the upgrade in October by Moody's of Australia's foreign currency credit rating and country credit ceiling from Aa2 to Aaa (Table 4). Moody's had reduced Australia's rating from Aaa to Aa1 in 1986, with a further reduction to Aa2 in 1989. Standard and Poor's and Fitch also lowered Australia's rating over this period, though subsequently Standard and Poor's raised its rating from AA to AA+ (equivalent to a change from Aa2 to Aa1). On a trade-weighted basis the Australian dollar has appreciated by around 3 per cent over the past three months, with the strongest gains against the Japanese yen and some other Asia-Pacific currencies (Table 5). The current level of the trade-weighted index is around 6 per cent below its 1990s average. The RBA has continued to purchase foreign reserves in the market over recent months. In net terms its outright transactions (and interest earnings) have lifted holdings of net reserves. At the end of October, net reserves were $11.1 billion, up from $7.0 billion at the start of 2002. Total reserve holdings have not changed much as there has been some reduction in foreign exchange held under swaps, from $29.5 billion in December 2001 to $26.2 billion. The global economy continues to recover at a modest pace, although sentiment in international financial markets remains fragile and the downside risks to the recovery appear to have increased recently. This is clearly affecting business and household confidence, and is evident in some recent economic indicators. Domestic demand continues to drive growth in the US, while exports have formed the basis of the very weak recoveries in both Europe and Japan. 
Exports have also played an important role in the pick-up in non-Japan Asia, though in contrast to earlier recoveries, domestic demand (and intra-regional trade) is making a sizable contribution to growth. Forecasts for world growth have been revised down in recent months. In its latest assessment of the international outlook, released in September, the IMF lowered its forecast for growth in the G7 countries in 2003 by ½ percentage point, to 2¼ per cent on a year-average basis (Table 6). The latest private-sector Consensus forecasts present a similar view in aggregate for 2003. The recovery has continued in the US, though the growth has been uneven. After a weak outcome in the June quarter, GDP increased by 0.8 per cent in the September quarter to be 3.0 per cent higher over the year (Table 7). However, the pace of growth slowed through the quarter. Nearly all of the growth in the September quarter was accounted for by consumption, which was boosted by a surge in purchases of motor vehicles. Household spending remains firm, supported by low interest rates and continued growth in household disposable income. The fall in long-term interest rates, in particular, has encouraged borrowers to refinance existing home loans at a lower cost. House prices have also risen, allowing home owners to use the increased equity to finance consumption. Marketing incentives, such as zero-interest financing for motor vehicles, have also boosted spending. Notwithstanding these positive factors, the fall in equity prices has reduced household wealth over the past couple of years, particularly for higher income households. Consumer sentiment has also fallen to well below average levels. After shedding jobs through much of 2001 and into the early part of this year, the US labour market recovered for a time, but employment flattened out again in September and October. Manufacturing employment has continued to fall, while employment in the public and services sectors has grown moderately. 
In contrast to the resilience of household spending, conditions in the business sector remain subdued. Industrial production has fallen in recent months, reversing some of the rise that had occurred over the previous six months (Graph 18). The most recent fall was largely associated with a decline in the production of motor vehicles, which had been boosted around the middle of the year when large sales incentives were re-introduced. There has also been a deterioration in manufacturing sentiment, with the ISM measure returning to levels associated with stable output, after rising strongly around the turn of the year. Orders for non-defence capital goods declined marginally in the September quarter. Notwithstanding these developments, with the current level of inventories relative to sales remaining at a low level, any growth in demand should be quickly reflected in production in the period ahead. Corporate profits, as measured in the national accounts, have begun to increase after falling quite markedly over the past five years. However, the ability of the larger corporations to raise funds is being adversely affected by market perceptions of corporate balance sheet fragility, owing in part to the shortfall in defined benefit superannuation schemes. The ratio of business investment to GDP has now reached quite low levels, which implies some unwinding of the investment overhang built up in earlier years, but significant excess capacity remains in a number of capital intensive sectors. US fiscal policy over the past year has been strongly supportive of growth, with the turnaround in the budget balance the largest since 1975. The more expansionary state of fiscal policy this year owes mainly to the effect of automatic stabilisers, as well as tax cuts and increased government expenditure, particularly for defence. Monetary policy also remains accommodative, with the Fed easing by a further 50 basis points in November. 
Consumer price inflation has fluctuated with movements in energy prices, but has remained below 2 per cent in year-ended terms (Graph 19). The core measure of inflation has edged lower and was 2¼ per cent in September. The large gap between services and goods inflation remains. Services inflation slowed to a little over 3½ per cent in the year to September, while core goods prices fell by around 1 per cent. Growth in labour compensation has eased with the employment cost index rising at a year-ended rate of 3.7 per cent in the September quarter; growth in the wages component has slowed a little from earlier in the year, while benefits have continued to grow strongly. Having recorded declines in output in the four previous quarters, the Japanese economy returned to growth in the June quarter, with GDP rising by 0.6 per cent (Table 8). Sizable revisions to the national accounts data eliminated the strong growth that was originally reported for the March quarter, while also indicating that the contraction in 2001 was deeper than previously thought (Graph 20). Exports have been a significant driver of growth, and while there have been modest increases in private consumption, both business and residential investment have continued to fall. There are some positive signs, with industrial production increasing in recent months, although the pace of growth has been somewhat slower than earlier in the year. This pattern is evident in the Tankan survey, with business sentiment and investment intentions having recovered from their lows towards the end of 2001, but with the rate of improvement slowing more recently. Exports have declined over the past couple of months, though they remain at high levels. Machinery orders, after falling through much of 2001, appear to have stabilised, albeit at a low level. Outside of the manufacturing sector, the tertiary activity index has been broadly flat over the past year. Conditions for the household sector remain poor. 
While employment has risen in recent months, with employment in September ½ per cent higher than the trough in May, the unemployment rate has remained around its historical high of 5½ per cent (Graph 21). The decline in compensation has accelerated, owing mainly to a reduction around the middle of the year in bonus payments, which are typically linked to corporate profitability. Deflation continues, with consumer prices declining by 0.7 per cent over the year to September. Over the past five years the price level, as measured by the consumer price index, has fallen by nearly 3 per cent. Non-Japan Asia has been an area of relative economic strength so far in 2002. The Chinese economy is continuing to grow at around 8 per cent per annum, according to official data. Excluding China, output in the region increased for a fourth consecutive quarter in June, to be 4½ per cent higher over the year. While some of this growth has been sourced from the external sector, a sizable proportion is due to rising domestic demand (Graph 22). Consumption was a major contributor to growth in most countries over the year to June, with business investment also rising strongly in the first half of this year. Growth in Korea has been particularly robust, owing mainly to rapid growth in consumption, which has been supported by a strong pick-up in household borrowing and low unemployment. In contrast, Hong Kong remains weak with domestic demand falling significantly over the year to June. Growth in Singapore also faltered in the September quarter. Manufacturing production in non-Japan Asia has expanded rapidly over the past year (Graph 23). While China accounted for much of the strength, output in the rest of non-Japan Asia has exceeded its previous peak in the middle of 2000. Reflecting increased domestic demand, manufacturing production has generally grown faster than exports over the past couple of years. 
Services sector output has also increased, while, in contrast, the construction industry has continued to weaken. Labour markets in most countries in the region continue to improve, with unemployment rates generally declining. China, Hong Kong, Singapore and Taiwan are experiencing mild deflation on a year-ended basis. In New Zealand output rose by 1.7 per cent in the June quarter to be 4.0 per cent higher over the year. Growth in both domestic demand and exports has been robust over the past year. Inflation, at 2.6 per cent over the year to the September quarter, has been steady for most of this year. A change to the Reserve Bank of New Zealand policy objective was announced in September, with the focus moving to a more medium-term approach to achieving price stability. The bottom of the inflation target was also raised from 0 to 1 per cent, with the 3 per cent upper limit retained. Europe has yet to show signs of a significant recovery. GDP in the euro area increased only modestly in the first half of the year, to be 0.7 per cent higher than a year earlier (Table 9). For the first time in nearly a year household consumption contributed to growth in the quarter, though investment fell for the sixth consecutive quarter and is nearly 3 per cent lower over the year. Exports remain the main source of growth. Across the major economies, growth remains weaker in Germany and Italy, while more robust consumption growth has driven stronger outcomes in France and Spain. More recent data have been disappointing. Growth in industrial production appears to have stalled, following a moderate pick-up in the early part of the year. With financial stresses of the type hampering the US recovery at least as severe in Europe, measures of business sentiment have retraced some of their earlier gains. 
Current business conditions, as measured in the German IFO survey, have remained weak, while some of the optimism evident in the expectations component of the survey earlier in the year has been wound back (Graph 24). A similar pattern is evident in the other major euro area economies. The export sector remains an exception, with exports rising over much of this year. The labour market has remained relatively resilient despite the slow pace of growth, with the unemployment rate for the euro area in the September quarter only 0.3 percentage points above the cyclical trough last year. However, the aggregate numbers mask differing developments amongst the major economies. In Germany, unemployment has risen by 0.7 percentage points from its low in 2001, while unemployment in Italy has continued to fall. Consumer sentiment has also unwound most of the gains from earlier in the year to be marginally above its trough late last year. Year-ended consumer price inflation has picked up slightly in the past few months to be a little over 2 per cent, after falling in the early part of the year (Graph 25). This mostly reflects the impact of higher oil prices, as core inflation (which excludes food and energy) has remained steady at just under 2½ per cent. In Germany, year-ended inflation is currently around 1 per cent, the lowest rate in the euro area, while it is around 1¾ per cent in France and 2¾ per cent in Italy. Annual wages growth in the euro area has remained a little under 4 per cent, after accelerating over the past few years. The European Commission has indicated that some requirements of the Stability and Growth Pact are likely to be modified, as weak economic conditions have made the 2004 deadline for countries to balance their budgets unrealistic. A relaxation of the deadline would give the French and Italian governments some scope to proceed with promised tax cuts. 
In the United Kingdom GDP grew by 0.7 per cent in the September quarter, to be 1.7 per cent higher over the year. Growth in the services sector in the quarter continued at a robust pace and the manufacturing sector expanded for the first time since 2000. Growth in household consumption remained firm, supported by rapid growth in house prices, which are over 20 per cent higher than a year ago. Rises in house prices have also boosted dwelling investment in recent quarters. The labour market has been resilient, with unemployment just above historical lows in the September quarter, as jobs have continued to be created in the services sector. Year-ended inflation (excluding mortgage payments) was around 2 per cent in recent months, as falling goods prices have offset increasing services prices. In Australia, according to the latest national accounts, real output rose by 0.6 per cent in the June quarter, to be 3¾ per cent higher than a year earlier (Graph 26). While this represents a somewhat slower pace of growth than was recorded in recent quarters, the aggregate figure masks sharply contrasting trends in domestic demand and exports. Domestic final demand has expanded by 6.9 per cent over the past year, close to its strongest annual pace of growth over the past decade. The recent strength in domestic demand has reflected the combination of continued rapid growth in consumer spending, the upswing in housing activity and a pick-up in business investment. In line with the strength in domestic demand, import volumes grew by 12 per cent over the year while exports declined by more than 1 per cent, owing to the weak global economy. Consequently, net exports subtracted 3 percentage points from GDP growth over the year to June (Table 10). The outlook for the Australian economy remains quite favourable, although growth will be reduced by the impact of the drought on the farm sector. 
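The net export arithmetic above can be approximated by weighting each component's growth rate by its share of GDP. A rough back-of-envelope check in Python, using hypothetical round-number expenditure shares (the actual national accounts weights differ):

```python
def net_export_contribution(export_growth, import_growth,
                            export_share, import_share):
    """Approximate percentage-point contribution of net exports to GDP
    growth: each component's growth rate weighted by its share of GDP."""
    return export_growth * export_share - import_growth * import_share

# Hypothetical shares of roughly 20 and 22 per cent of GDP; with
# exports down 1 per cent and imports up 12 per cent over the year,
# the subtraction is of the order cited in the text
contrib = net_export_contribution(-1.0, 12.0, 0.20, 0.22)
print(round(contrib, 1))  # about -2.8 percentage points
```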
In addition there is likely to be some rebalancing of growth, with domestic demand slowing while net exports continue to reduce growth, but by a lesser amount than they did over the past year. Household consumption spending continued to grow robustly in the June quarter, rising by 1½ per cent to be 4½ per cent higher than a year earlier. Spending on household goods was especially strong, in line with buoyant house-building activity. Recent indicators suggest that consumption has continued to rise at a good pace, with the volume of retail sales up by 0.7 per cent in the September quarter (Graph 27). Motor vehicle sales to households fell slightly in the September quarter but remain at a high level. Consumer spending has been supported by rising household incomes and house prices. Consumer sentiment has eased back in recent months, but is still slightly above its long-run average. The value of household assets rose by 12¼ per cent over the year to June and has averaged more than 10 per cent per annum over the past five years (Table 11). In recent quarters the large increases in house prices have more than offset the effect on aggregate household wealth of the falls in the value of equity holdings, which account for a smaller share of the total. This pattern continued in the September quarter, with indicators suggesting a further strong rise in house prices, while equity prices fell by 7½ per cent. Rises in the value of household assets and low financing costs have encouraged households to finance consumption through borrowing. Over the year to September, household borrowing rose by around 17¾ per cent. While the bulk of this borrowing is to finance dwelling acquisition, some part of that recorded as borrowing for housing is likely to have been used to fund consumption. Products and financial services such as home-equity loans and redraw facilities have improved the ability of households to borrow against their equity in property. 
Overall, growth in household debt outpaced that of household assets over the year to the June quarter, and as a result the ratio of debt to assets increased to 15 per cent from 14½ per cent in the previous year. However, debt service payments have remained relatively stable at around 6 per cent of household disposable income in recent quarters (Graph 28). Activity in the housing sector has expanded strongly over the past year with dwelling investment rising by 4¾ per cent in the June quarter, to be 30 per cent higher over the year. Leading indicators for housing suggest that this high level of activity is likely to be sustained until the March quarter of 2003. However, the indicators suggest somewhat divergent trends in the period ahead for the detached housing market, which is predominantly owner-occupied, compared with the multi-unit sector, which tends to have a greater share of investors. Activity in the detached housing sector appears to be close to its peak. Building approvals for the construction of new houses have stabilised at a high level over the past year, notwithstanding a temporary surge in August (Graph 29). The number of new loan approvals for the construction of housing for owner-occupation has declined over the course of 2002. In part this reflects the cessation at the end of June of the Commonwealth Additional Grant to first-home buyers building a home – the share of first-home buyers obtaining a loan approval has fallen noticeably over the past year. But it is also consistent with the pattern observed in previous housing cycles that the owner-occupied market tends to peak prior to the investor segment of the market. In contrast, leading indicators of activity in the multi-unit sector have picked up again recently. Approvals for the construction of multi-unit housing in the September quarter were nearly 18 per cent higher than a year earlier. The recent increase in approvals was spread across most of the capital cities. 
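The rise in the debt-to-assets ratio follows mechanically when debt grows faster than assets. A small illustration, with a hypothetical starting level and growth rates roughly matching the figures quoted above:

```python
# Hypothetical starting position: assets of 100 and debt of 14.5,
# i.e. a debt-to-assets ratio of 14.5 per cent
start_assets = 100.0
start_debt = 14.5
asset_growth = 0.12    # assets up about 12 per cent over the year
debt_growth = 0.1775   # borrowing up about 17¾ per cent over the year

# New ratio after one year of the faster debt growth
ratio = (start_debt * (1 + debt_growth)) / (start_assets * (1 + asset_growth)) * 100
print(round(ratio, 1))  # about 15.2 per cent
```

The roughly ½ percentage point rise matches the direction of the move described in the text; the exact figure depends on the true starting levels.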
Loan approvals to investors for new construction have also risen sharply in recent months, increasing by 15 per cent over the three months to August. Lending for housing is still being pushed along by historically high activity in the investor market. Over the past year, loan approvals to investors have risen by 42 per cent compared with 1 per cent for owner-occupiers over the same period (Graph 30). Thus virtually all the increase in approvals over the past year is attributable to investors. A number of factors seem to be contributing to the surge in lending to investors. Low interest rates and eager mortgage lenders have made finance readily available. The relatively poor performance of equity markets over the past couple of years has also increased the attractiveness of investing in property and made the task easier for those organisations aggressively marketing property as a tax-effective investment. Finally, the large increases in property prices in the past have led investors to assume that these will continue in the future, notwithstanding the increasing signs of oversupply in the investor property market. These developments share a number of similarities with those in the late 1980s (Graph 31). In both episodes, there was an increase in the appeal of housing as an investment option in the wake of a downturn in equity markets. Consequently, there was a marked increase in lending to investors in both periods, although investor participation has been greater in the current episode. Most indicators of house prices show strong rises in the September quarter following on from the large rises over recent years (Table 12). In Sydney, Melbourne and Brisbane – the cities for which there are the most complete data – there is a clear tendency for house prices to have risen further than unit prices. There is also evidence that, in some cases, unit prices have been flat or have fallen. 
The real estate consultants Residex collect a series for prices based on repeat sales; that is, they follow the prices of individual properties that have changed hands at least twice in recent years. This suggests that although house prices rose in the September quarter, the prices of units were flat in Sydney and fell in Melbourne. Thus the part of the property market to which most of the investor borrowing is being directed is the part where prices are rising least, if at all. In line with strong growth in domestic demand, businesses have generally experienced favourable conditions through the past year, though conditions have varied significantly between sectors. Growth in the goods-producing sector has been robust, with strength in the construction, manufacturing and retail sectors, while the service sectors have continued to slow (Graph 32). This is particularly so for the property and business services sector, which has been affected by reduced spending by firms on, inter alia, consulting, marketing and technical services. Most measures of business conditions remain at levels consistent with trend growth in the non-farm sector (Graph 33). The broadly based NAB survey suggests that business conditions continued to improve in the September quarter, particularly in sectors such as construction, retail and wholesale trade. In the manufacturing sector, the ACCI-Westpac and AIG surveys report conditions being slightly above average despite easing recently. Consistent with the recovery in business investment, capital spending intentions remain high. While measures of business confidence have fallen in recent months, they generally remain at or above long-run average levels. The farm sector remains a notable exception and conditions have deteriorated further in recent months as the drought has persisted. 
The June quarter Rabobank Rural Confidence Survey recorded a large increase in the number of respondents expecting investment in the agricultural economy and farm incomes to weaken over the next 12 months (Graph 34). Confidence has fallen across all parts of the country, with the sharpest falls in New South Wales owing to the severity of drought in that state. The dry conditions curtailing farm output will limit rural exports (see the chapter on ‘Balance of Payments’) and reduce farm incomes, as well as the incomes of businesses connected to the rural economy, with adverse consequences for overall economic growth (see ‘Box A: Economic Effects of the Drought’ for further details). Reflecting the strong conditions in the non-farm sector, corporate profits as measured by gross operating surplus (GOS) were around 12 per cent higher over the year to June, despite falling slightly in the June quarter. Profits of domestically oriented industries continued to rise in the June quarter, benefiting from the strength in domestic demand. However, this was more than offset by a decline in mining profits, reflecting lower export volumes, lower US dollar prices for some resources and the higher Australian dollar prevailing in the first half of the year. Nonetheless, the level of mining profits remains high by historical standards. Small-business profits also declined slightly in the June quarter, with a decline in rural incomes more than offsetting continued strength in the retail and residential construction sectors, but have grown by about 12 per cent over the year. Business surveys show that firms generally remain confident about the profit outlook. In addition to the internal funding provided by the rise in profits over the past year, businesses have also increased their capital raisings from external sources. Business borrowing from intermediaries grew at an annualised rate of 8 per cent over the six months to September, after being broadly flat over the preceding six-month period. 
Direct capital raisings through both non-intermediated debt and equity issuance increased more modestly over this period, possibly reflecting the decline in equity markets and a widening in corporate bond spreads. The overall financial position of the business sector remains in good shape. Debt levels relative to gross operating surplus have been broadly constant in recent years, following significant reductions in leverage in the early 1990s, and they remain low by historical standards. Interest payments expressed as a share of profits remain at low levels. The combination of ready access to funding at a relatively low cost and strong domestic economic growth has been conducive to a solid recovery in investment spending. Business investment rose by nearly 15 per cent over the year to the June quarter, led by strong increases in spending on both machinery and equipment and buildings and structures; growth in computer software investment picked up only modestly through this period. Despite this growth over the past year, investment as a share of GDP still remains at a relatively low level (Graph 35). Indicators of investment intentions point to further strong growth in business investment in 2002/03. The June quarter ABS capital expenditure (Capex) survey suggests that investment in machinery and equipment is expected to grow by around 13 per cent in nominal terms this financial year, assuming a five-year average realisation ratio; private-sector surveys also report robust investment intentions. The pick-up in equipment investment spending is expected to be particularly strong in the mining, manufacturing, and transport and storage industries. However, weaker spending on agricultural equipment, owing to drought conditions and an expected fall in agricultural income, is likely to provide a restraining influence on overall investment in machinery and equipment in the current financial year. 
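The realisation-ratio adjustment mentioned above scales a survey's raw spending expectation by the historical ratio of actual outcomes to the corresponding earlier expectations. A sketch with made-up figures (not the actual Capex survey data):

```python
def realisation_ratio(actuals, expectations):
    """Average ratio of actual to expected spending over past periods."""
    ratios = [a / e for a, e in zip(actuals, expectations)]
    return sum(ratios) / len(ratios)

def adjusted_forecast(raw_expectation, ratio):
    """Scale a raw survey expectation by the historical realisation ratio."""
    return raw_expectation * ratio

# Hypothetical five-year history of expected vs actual equipment
# spending ($bn): firms have tended to spend a little more than
# they initially expected
past_expected = [40.0, 42.0, 45.0, 44.0, 46.0]
past_actual = [44.0, 45.0, 47.0, 46.0, 49.0]
ratio = realisation_ratio(past_actual, past_expected)
print(round(adjusted_forecast(50.0, ratio), 1))  # about 53.3
```

Because outcomes have historically exceeded expectations in this (hypothetical) sample, the adjusted forecast sits above the raw survey figure.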
Investment in buildings and structures has also risen strongly over the first half of 2002. Favourable forward-looking indicators and a large build-up of work in the early stages of construction imply that growth in buildings and structures investment should continue over the coming quarters. Furthermore, state governments have recently announced a suite of major projects, some with private-sector involvement and largely concentrated in the transport and utilities industries, which will result in a significant increase in investment. The Access Economics Investment Monitor confirms the favourable outlook for buildings and structures investment, with work about to begin or recently having commenced on a large number of resource-related projects, many public infrastructure projects, and on office construction. Looking further ahead, a number of large-scale engineering projects are in prospect as a result of Australia LNG securing a contract to supply LNG to China for 25 years, beginning in about the middle of the decade. After a period of slower growth around the middle of the year, employment increased by 0.6 per cent in the three months to October, and is now 2.0 per cent higher than the same period last year (Graph 36). Full-time employment rose by 0.6 per cent in the three months to October to be 1.1 per cent higher than a year ago, while part-time employment has remained strong, rising by 0.7 per cent in the three months to October, to stand 4.5 per cent higher over the year. These gains in employment have been reflected in the unemployment rate, which has fallen by 1 percentage point since the beginning of the year. Improved labour market conditions have been evident in most states. Queensland recorded the fastest employment growth in the year to the three months to October, rising by 3.1 per cent, and, consistent with this, recorded the largest unemployment rate fall across all states (Table 13). 
Employment growth in South Australia and Western Australia has also been strong, at 2.6 and 2.4 per cent in the year to the three months to October. While employment growth has been slower in Victoria and NSW, these states have recorded the lowest unemployment rates. Tasmania continues to record a higher unemployment rate than the rest of the country. The decomposition of employment growth by industry reveals trends generally in line with the pattern of output (Table 14). Employment in the retail and wholesale trade sectors, which increased by 0.8 per cent in the September quarter, continues to be supported by strength in consumer spending. There has also been a recovery in manufacturing employment, with a 2 per cent rise in the quarter, following two consecutive quarters of growth. However, employment remains weak in industries that have been most exposed to the downturn in tourism, such as accommodation, cafes and restaurants and transport and storage, which includes airline travel. Furthermore, employment in the rural sector fell sharply in the quarter, to be well down on levels of a year ago, with much of the fall concentrated in NSW and Western Australia, reflecting the initial effects of drought conditions. Labour productivity growth measured on an output per person employed basis has slowed somewhat from the rapid pace recorded in the second half of last year. In the June quarter this measure of productivity increased by 0.4 per cent, to be 2.2 per cent higher over the year. Productivity measured on an hours-worked basis also increased by 0.4 per cent in the quarter, and by 3.2 per cent in year-ended terms. Forward-looking indicators of labour demand have generally improved over the past few months, and remain supportive of a continued expansion in employment in the near term (Graph 37). The ABS employer-based measure of vacancies picked up in the September quarter to reach levels considerably higher than a year earlier. 
After patches of weakness in the middle of 2002, measures of print-based vacancies have also reported more positive outcomes, restoring the upward trend evident in these series since the beginning of the year. The ANZ newspaper-based series posted a 4.4 per cent increase in October, to be 17.8 per cent higher than the same time last year. In contrast, skilled vacancies data compiled by the Department of Employment and Workplace Relations (DEWR) fell slightly in October, but are almost 20 per cent higher than a year ago. Employment intentions for the December quarter reported in the NAB and ACCI-Westpac surveys are well above long-run levels, while intentions reported in other surveys, such as Dun & Bradstreet, remain around long-run levels. Over the course of this year, conditions have been extremely dry across a wide stretch of the country. The drought has severely reduced winter-crop production and brought forward livestock slaughtering, and will significantly curtail farm output and incomes. The direct effects of the drought will be most evident in a decline in agricultural production and an associated reduction in rural exports (around two-thirds of agricultural production is exported). However, fluctuations in rural exports tend not to be as pronounced as those of agricultural production as rural exports include output of the forestry and fishing industries, as well as processed agricultural products. In addition, agricultural inventories tend to be run down during periods of drought. The reduction in farm incomes will translate into lower farm consumption and investment (farm equipment is, on average, around 8 per cent of total machinery and equipment investment spending). Although difficult to quantify, the drought will also have indirect effects on the economy, most particularly in those industries that supply and service agriculture, such as the wholesale and transport sectors, as well as retail operations in rural areas. 
The adverse effect of a drought on production varies across the different parts of the farm sector. Drought conditions lead to an immediate reduction in grain production, whereas a downturn in meat production tends to occur with some delay as farmers initially increase slaughter rates in response to the rising cost of feed. Conversely, crop production recovers faster than meat production following the breaking of a drought, with meat production delayed by the need to rebuild stock numbers. Similarly, there are differing effects on rural commodity prices, with wheat and other grain prices initially rising, reflecting the reduced supply, and meat prices initially falling; the reverse price movements typically occur following the cessation of the drought. To provide some guidance as to the likely effects of the current drought on the economy, it is useful to examine the effect of earlier droughts. Although changing weather conditions are an ever-present source of volatility in agricultural production, two particularly severe droughts are identifiable in the past 20 years: the first in 1982–1983, which affected eastern and southern Australia; and the second, a series of low-rainfall years from 1991 to 1995, during which several regions across Australia experienced varying degrees of drought conditions at different times. In both episodes, agricultural production and rural exports declined significantly and then recovered strongly following the breaking of the drought (Graph A1). The sharp fall and subsequent rise in agricultural production during the 1982–1983 drought first subtracted, and subsequently added, around 1–1½ percentage points to GDP growth (Graph A2). In the 1991–1995 drought, GDP was reduced by around ½–¾ percentage point in both 1991/92 and 1994/95, and was subsequently boosted by around ¾ percentage point in 1995/96. 
Based on the latest Australian Bureau of Agricultural and Resource Economics (ABARE) estimates of crop and livestock production, agricultural production could fall by close to 15 per cent in 2002/03, more than half the fall experienced in 1982/83 and close to the falls in 1991/92 and 1994/95. The smaller expected fall in production in the current episode in part reflects improvements in cropping techniques since the 1980s, which have enabled farmers to cope more effectively with adverse weather conditions. Another factor reducing the impact on the aggregate economy is that the share of agricultural production in GDP has fallen from around 6 per cent in the early 1980s to just over 3 per cent in recent years. Nevertheless, the forecast decline in production would still directly subtract around ½ a percentage point from aggregate economic growth in 2002/03 and as much as 1 percentage point from growth over the year to June 2003. Fluctuations in farm incomes – that is, the proceeds of sales net of operating costs – tend to be of considerably larger magnitude than the fluctuations in production. In September, ABARE estimated that farm incomes will be around 60 per cent lower in 2002/03 than the relatively high levels in the previous year. To some extent, the increased use of Farm Management Deposits (FMD) in recent years has provided scope for farmers to smooth their income and expenditures. The recent high levels of farm income have facilitated a large build-up of FMDs, which should help insulate farmers from some of the effects of the expected fall in earnings. Nonetheless, a significant decline in expenditure by rural producers can be expected as a consequence of the drought. While the drought is a serious negative shock to the economy, past experience suggests that there will subsequently be a significant boost to growth when the drought breaks, as crop production typically rebounds strongly following drought years. 
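The direct contribution to GDP growth discussed above follows from simple scaling: the fall in farm production multiplied by agriculture's share of GDP. As a rough cross-check (a sketch, not from the Statement; the 3.3 per cent share is an assumption standing in for "just over 3 per cent"):

```python
# Rough check of the GDP arithmetic in Box A: a fall in farm output scaled by
# agriculture's share of GDP gives the direct subtraction from growth.
farm_share_of_gdp = 3.3    # per cent of GDP ("just over 3 per cent"; assumed value)
fall_in_production = 15.0  # per cent fall in 2002/03, based on ABARE estimates

# direct subtraction from aggregate growth, in percentage points
direct_subtraction = farm_share_of_gdp * fall_in_production / 100
print(round(direct_subtraction, 1))  # ≈ ½ a percentage point, as quoted
```

The same scaling explains why an identical drought now has a smaller aggregate effect than in the early 1980s, when agriculture's GDP share was around 6 per cent.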
In the past, El Niño episodes have tended to start towards the middle of the calendar year and last a little less than 12 months. At this stage, the Bureau of Meteorology expects drier-than-average conditions to persist in much of the country for at least the next several months. With the Australian economy continuing to grow faster than a number of its major trading partners, the trade deficit has widened over the past year, to be around 1½ per cent of GDP in the September quarter. Assuming that the net income deficit as a proportion of GDP remained constant, this would suggest a current account deficit of around 4¼ per cent of GDP in the September quarter (Graph 38). Unlike the experience in previous international downturns, Australia's terms of trade have remained relatively stable during the recent period at around their highest level for over 10 years. This largely reflects the high prices received for a number of commodity exports, as well as declining import prices, particularly for electronic equipment. The strength of Australia's terms of trade has helped to limit the cyclical widening in the current account deficit. Reflecting the weak world economy, the value of exports has fallen by about 3¾ per cent over the year to the September quarter. Much of this decline was concentrated towards the end of last year, when merchandise exports to most markets fell. Since then, growth in merchandise exports has varied across markets (Table 15). Strength in domestic demand has supported rapid growth in exports to Korea and China, while exports to New Zealand have picked up strongly. A recovery is also evident in merchandise exports to Japan. In contrast, exports to the US remain weak, owing to sizable falls in exports of motor vehicles, pharmaceuticals and meat. There have been large declines in exports to India and the Middle East, though this followed very strong rises in previous years. 
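The current account arithmetic above can be reproduced directly: the current account deficit is approximately the trade deficit plus the net income deficit, each expressed as a share of GDP. A minimal sketch using the figures quoted in the text (the 2.8 per cent net income deficit is the June quarter figure, assumed constant as in the text):

```python
# Back-of-the-envelope current account arithmetic, as shares of GDP:
# current account deficit ≈ trade deficit + net income deficit.
trade_deficit = 1.5       # per cent of GDP, September quarter
net_income_deficit = 2.8  # per cent of GDP, assumed unchanged from the June quarter

current_account_deficit = trade_deficit + net_income_deficit
print(round(current_account_deficit, 1))  # close to the 4¼ per cent quoted
```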
After declining sharply in the second half of 2001, the value of resource exports has subsequently recovered, increasing by about 7½ per cent over the past three quarters (Graph 39). A large rise in the value of exports of oil and LNG accounted for around half of the increase, mainly reflecting increases in oil prices. In contrast, receipts from coal exports have fallen as subdued growth in world industrial production and an unusually mild northern hemisphere winter have pushed thermal coal prices lower. The value of base metals exports also fell over the three quarters to September, owing partly to softer prices for a number of base metals. In the period ahead, the Australian Bureau of Agricultural and Resource Economics (ABARE) expects nickel and zinc production to be boosted by the introduction of new capacity. In August, the Australia LNG consortium won the right to supply LNG to China. The contract to supply over 3 million tonnes of LNG annually to China's Guangdong province over 25 years, beginning in 2005/06, is reported to be worth between $20 billion and $25 billion. According to ABARE, LNG exports, which account for about 4 per cent of the value of resource exports, are expected to double over the next five years. In order to meet supply commitments, the consortium is expected to invest in a fifth LNG processing train. The value of rural exports fell by about 2½ per cent in the September quarter to be around 13 per cent lower over the year. The value of meat exports fell by about 8 per cent in the September quarter, and was more than 17 per cent lower than a year earlier, partly reflecting reduced demand from Japan. The capacity for Australian beef exports to be diverted to other markets is limited, given the existence of quotas in the US, the other major destination for Australian meat exports. Beef prices have declined, exacerbated by drought conditions in the US and Australia, which have led to increased livestock slaughterings. 
The value of wool exports has risen over the past few quarters, with higher prices offsetting the reduction in flock size. The value of cereal exports fell by around 1 per cent in the September quarter. The drought is expected to lead to a significant decline in rural exports in coming quarters (see Box A for a discussion of the economic effects of the drought). ABARE recently revised down its forecast for wheat production in 2002/03 to just over 10 million tonnes, compared with 24 million tonnes in the previous financial year (Graph 40). Weak growth in our trading partners continues to adversely affect manufactured export earnings, with the value of manufactured exports only 2½ per cent higher than a year ago, well down on the average annual rate of growth of 9½ per cent over the past decade. Exports of machinery have declined over the past year, falling by around 4½ per cent over the year to the September quarter. However, receipts from the export of transport equipment continue to rise and have more than doubled over the past six years. Service exports also continue to bear the effects of slow growth in the world economy and international trade. They rose by about 1½ per cent in the September quarter but remain about 3 per cent lower over the year. Weakness in service export earnings over the past year has been fairly broadly based, with the value of transportation and travel services exports both falling. Strong growth in domestic demand, coupled with falls in import prices, has resulted in robust growth in import volumes over the past year, with import values around 8 per cent higher over the year to the September quarter. Reflecting the pick-up in business investment, growth in the value of capital imports has been particularly strong, rising by 17 per cent over the year to the September quarter (Graph 41). Imports of civil aircraft have been the largest contributor to the increase over the past year. 
The value of consumption imports has also risen strongly, and in the September quarter was 15 per cent higher than a year earlier, partly a result of rising imports of motor vehicles. The net income deficit narrowed slightly in the June quarter to 2.8 per cent of GDP, around the level of the past four years. The ratio of net interest payments to exports was also little changed at around 9 per cent. There were large downward revisions to the estimate of net foreign equity liabilities from the September quarter 1998 owing to improved recording of foreign assets held by Australian fund managers (Graph 42). The revised data show that net foreign liabilities have been broadly unchanged at around 55 per cent of GDP for nearly 10 years, in comparison with previous data, which suggested that they had increased over the past few years. In the June quarter, Australia's net foreign debt rose slightly to around 46 per cent of GDP, reflecting a continuation of the recent trend of strong net debt inflow. Net foreign equity liabilities also rose, primarily due to valuation effects, with the Australian share market outperforming those abroad in local currency terms. Net equity outflow was again recorded in the June quarter, consistent with the pattern of the past few years during which flows of Australian offshore equity investment have been strong. In aggregate, commodity prices have been fairly steady, with the RBA Commodity Price Index falling by 0.2 per cent in SDR terms in the three months to October, and by 3.4 per cent in Australian dollar terms (Graph 43). However, this masks some divergent trends across commodities. Rural commodity prices have increased sharply over the past few months to reach their highest level in over 12 years (Graph 44). Wheat prices recorded significant gains, with unfavourable weather conditions in Australia, North America and Europe constraining world supply.
In contrast, beef and veal prices have fallen over the past three months because of the drought, which continues to induce higher slaughter rates both in Australia and the US. Strong demand for wool, coupled with an Australian clip that is expected to be the lowest in five decades, has pushed wool prices higher, with the price around 40 per cent higher than a year earlier. These pressures are expected to moderate, however, as processors substitute away from wool to synthetic yarns. Sugar prices have risen over the past few months from very low levels, though these gains are not expected to be maintained as world sugar production is forecast to be close to record levels in 2002/03. Base metals prices fell in the three months to October to levels near their previous trough in October 2001, largely as a result of the slow recovery in world industrial production. Despite falling over the past six months, nickel prices are still around 27 per cent higher than a year ago, owing mainly to strong demand and tight supply. Most other base metals prices are well below levels of a year ago. Iron ore prices have fallen for most of this year, though they have stabilised in recent months, with increased Chinese steel production raising demand. Falls in prices for coking and thermal coal have also abated recently in line with rising oil and gas prices. Recent movements in the gold price have reflected equity market fluctuations, though tensions in the Middle East have kept the price at a relatively high level. Geopolitical tensions have also been a factor in the volatility in oil prices over the past few months (Graph 45). Other factors influencing the price have included the low levels of US oil inventories and OPEC's decision not to increase its existing production quotas. There has been a small decline in short-term interest rates since the last Statement (Graph 46). 
For much of the past three months, however, market expectations were for the cash rate target to be unchanged in the near term but to rise by 25 basis points some time prior to the middle of 2003. This view was underpinned by a reasonably constant stream of solid domestic economic news, counterbalanced by concerns about the future impact on the domestic economy of a possible deterioration in global growth prospects. More recently, however, expectations of a rise in the cash rate target in the foreseeable future have disappeared. This shift in expectations has occurred as markets have given greater weight to the possibility that the domestic economy will be weighed down by developments abroad. Long-term market rates have fallen considerably over the past six months, and have moved within a relatively wide range since the previous Statement. Yields on 10-year government bonds stood at around 5.60 per cent in early November, almost one percentage point below the 2002 high of 6.50 per cent recorded in April (Graph 47). In late September bond yields fell to an intra-year low of 5.25 per cent. The driving force behind this decline seemed to be the large falls in share prices in the US and many other major countries, which led to rising risk aversion among investors and concerns about the extent to which global economic growth would be affected. Geopolitical risks added to the uncertainty. The decline in bond yields was partly reversed in mid October, when the recovery in the US stock market caused US bond yields to rise sharply. The yield on Australian 10-year government bonds rose by 50 basis points in a matter of days, to almost 6 per cent. This increase was, however, short-lived with concerns about the global economy again rising to the fore. 
While day-to-day movements in Australian bonds closely followed those in the US market, the spread between long-term bond yields in Australia and the US widened considerably in net terms over the period, at one point reaching almost 200 basis points (Box B). From mid October, however, the spread narrowed to around 165 basis points, but remains considerably above the average levels of the past few years. Spreads between yields on Australian corporate and government bonds have widened in recent months (Graph 48), but much of the increase is due to investors' concerns regarding overseas companies (particularly US financial institutions) which have issued bonds in the Australian market. While spreads averaged across all issuers have widened by 10 to 20 basis points, spreads on bonds issued by domestic corporates have risen by just 5 to 10 basis points. Overall, the rise in spreads in the corporate bond market has been much less than that in the US market (Graph 49). Intermediaries' variable indicator interest rates were unchanged during the three months to end October, reflecting a constant cash rate (Table 16). Banks' fixed rates for housing and small business have fallen slightly, although not by as much as the market yields against which they are priced. Although fixed-rate mortgages are now slightly cheaper than banks' standard variable-rate mortgages, the proportion of new loans that are at fixed rates has fallen from 9 per cent in June to 6 per cent currently. Issuance of domestic non-government bonds in the September quarter rose to $9.7 billion, 15 per cent above the average level of the past year. There was particularly strong growth in the asset-backed category, with record issuance of $5.8 billion, around double the rate of a couple of years ago (Graph 50). The market has digested this increased supply reasonably well, partly helped by interest from foreign investors. An increasing range of issuers is tapping the market. 
In particular, there were several issues by non-conforming lenders (those that offer mortgages to relatively high-risk borrowers). In addition, the September quarter saw the first issue of securities backed by loans to medical professionals. In contrast, domestic corporate bond issuance was well down in the September quarter on levels seen earlier this year, although in October issuance picked up due to a $1.5 billion issue by a transport infrastructure company (Table 17). The fall in the September quarter can, in part, be attributed to the recent volatility in financial markets which has led some companies to defer or cancel planned raisings. Notwithstanding this, companies with good credit ratings and recent strong share price performance that have approached the market have found their issues oversubscribed. Issues during the September quarter all carried a credit rating above BBB. The total amount of domestic non-government bonds outstanding currently stands at just under $125 billion, up from just under $100 billion a year ago (Graph 51). This now exceeds the total amount of government securities on issue (Commonwealth and state combined). In addition to domestic issues, Australian entities' offshore issuance has been strong. In the September quarter, Australian entities issued $16 billion of bonds offshore, around 60 per cent more than issued in the domestic market. Banks continue to account for a large proportion (around two-thirds) of the funds raised offshore (Graph 52). These primary raisings have been in a range of foreign currencies, but have mostly been swapped back to Australian dollars. As was the case for the US share market, the recovery in Australian share prices from their post-September 11 lows began to peter out by March (Graph 53). Since then Australian share prices have fallen by 12 per cent. While this is a significant fall, it is only about half that in US share prices over the same period. 
The fall has come in two broad phases, both of which mirrored developments in US markets. The first extended through to end July. During this period, share prices were undermined by growing concerns about US corporate governance arrangements. Markets staged a sharp, but brief, recovery in August and early September, but by early October had fallen to new lows. This second phase of falling share prices seemed to owe more to concerns that the world economic recovery may be stalling. Overall, the Australian share market has continued to exhibit a good deal more stability than its US counterpart. This is borne out by the pattern of daily price movements. While most daily movements in the ASX 200 were no greater than 1 per cent, the S&P 500 moved by more than this amount on around 70 per cent of trading days during the past three months (Graph 54). Similarly, while options prices suggest that expectations of future share price volatility have risen steadily over the past six months, the Australian market is expected to be considerably less volatile than markets overseas (Graph 55). In the US, expected volatility in recent months has reached levels last seen during the Russian debt default crisis and above the level seen in September 2001. Despite the overall stability of the Australian market over the past three months, share prices of insurance companies have fallen noticeably, dropping 7 per cent to be 37 per cent below their June 2001 peak (Graph 56). Much of the fall reflects concerns about the capitalisation of some insurers' offshore operations, with the weakness in offshore equity markets substantially reducing the value of some insurers' investment portfolios. Banks' share prices have fallen by more than the overall market decline since end July. 
While banks had previously proved resilient to general equity market weakness, a global sell-off of the banking sector (reflecting concerns that weaker economic growth will impact on the credit quality of banks' assets) has put downward pressure on their share prices. In addition, the market is pricing in some expectation that credit growth will slow (as housing market activity abates) and that this will reduce the banks' future earnings growth. Falls in the share prices of industrial companies also made a substantial contribution to the decline in the overall index. The weakness in the industrial sector reflects, in particular, the impact of the drought on agricultural suppliers and of higher oil prices on transport companies. In August and September most of the largest listed companies in Australia reported their results for the six months to June this year. In aggregate, reported earnings of the top 100 companies dropped sharply in the first half of 2002 (Table 18), but this fall was more than accounted for by News Corporation's large loss from its investment write-downs. Excluding media companies, earnings rose 13 per cent compared to the first half of 2001. This is a strong performance by international standards. The transport and commercial services sectors performed particularly well. Note: Includes the 92 companies that have reported earnings for the first half of 2002. Overseas-incorporated companies (Telecom NZ, James Hardie Industries and Lion Nathan Limited) are excluded. Equity market analysts remain optimistic, expecting further strong profit growth. Although they have revised down their expectations of industrial companies' earnings for the coming 12 months, current expectations are for an 18 per cent rise in full year earnings for 2002, followed by a further 13 per cent rise in 2003.
Over the past three months, Australia's ‘as reported’ P/E ratio has declined slightly to 27 but is well above mid 2001 levels and almost as high as recent readings for the US P/E ratio. However, all of this year's net rise in the Australian ratio is attributable to News Corporation. Excluding News Corporation, the ratio is 17 – the lowest level seen since 1996 (Graph 57). At the beginning of October domestic share price indices were re-weighted to move from individual company weights based on total market capitalisation to weights based on free float. Free float is that part of market capitalisation available for purchase after excluding holdings by the government and strategic shareholdings of more than 5 per cent. The re-weighting brings the Australian indices into line with the calculation of indices overseas. International indices published by Morgan Stanley Capital International moved to this weighting method last May. As a result of the changed weights, the measured market capitalisation of the ASX 200 fell by around 7 per cent. Most of this fall occurred in the media industry (a large part of the consumer discretionary sector), reflecting large family holdings in News Corporation (Table 19). Financials now account for 45 per cent of the index. There was an increase in turnover prior to the index change, as funds managers rebalanced their portfolios, with the increase most pronounced for banks and media. Equity raisings were solid during the September quarter, with $4 billion raised (Graph 58). Most of the raisings were through placements and rights issues, with initial public offerings amounting to just $0.6 billion. Buybacks remained moderate and significantly below the levels seen in 1999 and 2000. Growth in margin lending for equities and managed funds slowed considerably in the September quarter after very strong growth in the preceding six months (Table 20). While the number of clients grew, the average loan size fell. 
In comparison, margin debt in the US fell 11 per cent over the same period. Consistent with the increase in volatility in the Australian market, margin calls were almost double the level seen in the previous quarter but remain a third below their level in the September quarter last year. In line with the overall fall in the share market, the value of securities underlying margin debt fell, resulting in a slight increase in the average gearing of borrowers. Movements in Australian bond yields on a day-to-day basis are heavily influenced by movements in US bond yields (Graph B1). Moreover, a significant share of the daily movement in Australian yields tends to occur overnight. Since June, for example, the absolute daily change in Australian 10-year bond yields has averaged almost 6 basis points during the overnight trading session. The comparable figure for the Australian day session is less than 3 basis points. And even movements during the Australian day often track very closely movements in US Treasuries traded in Tokyo. Despite this close correlation, the spread between Australian and US yields has widened considerably over the course of 2002. At one point in October it reached almost 200 basis points, after having been around 90 basis points at the beginning of the year (Graph B2). In early November the spread narrowed somewhat to around 165 basis points. The widening of the spread can be attributed largely to the difference in the growth outlook for the two countries. This is suggested by movements in real bond yields obtained from 10-year indexed government bonds (Graph B3). In the United States real yields fell by around 1 percentage point between June and early October, to a low of just 2 per cent.
This fall can be presumed to reflect one or both of two factors: concerns about future growth prospects for the US economy, which would imply that market interest rates would be low for an extended period; and/or a reduced tolerance for risk, with greater appetite by investors to hold ‘riskless’ assets. While real rates rose in the second half of October on the back of relatively positive corporate earnings announcements, they remain significantly below their levels earlier in the year. In contrast, real yields in Australia have been much more stable. The resulting widening of the real yield spread accounts for all the widening of the nominal spread. Interestingly, most of the movement in the Australian–US spread has occurred in the overnight trading session, rather than in the Australian day (Table B1). This reflects the recent tendency for Australian bond yields to fall by less than US yields on days that US yields decline. This is consistent with the stronger economic indicators in Australia relative to those in the United States. In net terms, the spread has moved little during the Australian trading day since the end of June. The previous occasion on which the yield spread increased by a significant amount for a sustained period was in the first half of 1994. In contrast to the current episode, on that occasion it was concern about the inflation outlook in Australia that was largely responsible; during the course of 1994 the expected inflation rate (calculated from bond yields) rose by almost 2 percentage points to nearly 5 per cent. In the current episode, expected inflation has, in net terms, changed little (Graph B4); while it declined from end June to early September, it has subsequently increased back to around 2½ per cent, a level consistent with the Bank's medium-term inflation target. Financial conditions remain expansionary, with real short-term interest rates below their recent averages. 
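The expected inflation rate referred to in this Box is backed out as the gap between nominal 10-year government bond yields and the real yields on 10-year indexed bonds. A minimal sketch of that decomposition, using the nominal yield quoted in the text; the 3.10 per cent real yield is an assumed figure chosen only so the example reproduces the "around 2½ per cent" result, not a number from the Statement:

```python
# Expected inflation backed out of bond yields: the gap between the nominal
# 10-year yield and the real yield on a 10-year indexed bond.
def expected_inflation(nominal_yield, real_yield):
    return nominal_yield - real_yield

# Nominal yield from the text (early November); real yield is an assumption.
aus = expected_inflation(nominal_yield=5.60, real_yield=3.10)
print(round(aus, 2))  # 2.5 — consistent with the ~2½ per cent quoted
```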
Over the past quarter, some fixed-term borrowing rates charged by intermediaries have declined a little, reflecting falls in nominal bond yields. Other indicators also suggest that financial conditions are supportive of growth; credit growth remains strong, and the real exchange rate is low relative to its historical average. With bond yields falling, however, the yield curve now shows less upward slope, which could be interpreted as indicating a reduction in the degree of monetary expansion. The effect of lower yields is also being partially offset by the further declines in equity prices and increases in some corporate yield spreads over the past few months. The cash rate has remained at 4.75 per cent since June, after increasing following the May and June meetings of the Board. Estimates of the real cash rate based on alternative measures of inflation expectations remain lower than recent averages, though there are some noticeable differences among the alternative measures (Table 21). The measure that uses underlying inflation as a proxy for inflation expectations remains around ¾ percentage point below its average over the period since 1997, a period during which inflation has averaged around 2¼ per cent and output growth around 4 per cent. The measure based on inflation expectations derived from the bond market is only a little below its average over this period, reflecting a decline in bond-market inflation expectations over the September quarter. Note: Current observations use the latest cash rate, the September quarter 2002 weighted median inflation rate, and average bond market and consumer inflation expectations over the September quarter 2002. Lending rates of intermediaries are also below recent historical averages, in both real and nominal terms (Graph 59). Over the past quarter, fixed interest rates charged by intermediaries have fallen slightly, reflecting the decline in longer-term yields.
However, the effect of this on aggregate business capital raising activity should be limited, since the stimulatory impact is offset to some extent by the further falls in equity prices, and by increased yield spreads on lower-rated corporate bonds. Further details on these developments are reported in the chapter on ‘Domestic Financial Markets’. The slope of the yield curve, as measured by the difference between long-term and short-term interest rates, provides an alternative indication of the stance of policy. A positive-sloping yield curve (or, more correctly, a yield curve with a larger than average positive slope) is normally interpreted as evidence that policy is expansionary since it implies that short-term interest rates are below the level at which they are expected to be on average over the medium term. Currently the slope of Australia's yield curve remains positive, but it has flattened out over the past few months, reflecting lower bond yields (Graph 60). Total credit grew at an annualised rate of 14.1 per cent over the six months to September, an increased pace from the annualised rate of 8¾ per cent over the six months to March (Graph 61). The pick-up in credit growth has reflected some further acceleration in household credit as well as a marked pick-up in the rate of business borrowing. The strength in household credit primarily reflects the rapid growth in housing loans, particularly to investors. Business borrowing grew at an annualised rate of 8 per cent over the six months to September, a considerable increase from the ¾ per cent annualised growth over the preceding six-month period. As detailed in the chapter on ‘Domestic Economic Activity’, this increased fund-raising by businesses – which is also evident in direct market raisings – is consistent with the favourable investment outlook. 
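The annualised six-month growth rates quoted for credit can be reproduced by compounding the six-month change. This is a sketch of the presumed method; the 6.8 per cent six-month figure is chosen so the result matches the quoted 14.1 per cent, and is not itself taken from the text:

```python
# Annualising a six-month growth rate by compounding it over two half-years.
def annualised(six_month_growth_pct):
    g = six_month_growth_pct / 100
    return ((1 + g) ** 2 - 1) * 100

# A six-month rise of about 6.8 per cent annualises to roughly 14.1 per cent:
print(round(annualised(6.8), 1))  # 14.1
```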
Growth in the monetary aggregates has moderated over the six months to September, with broad money increasing at an annualised rate of 9 per cent, compared with 11 per cent over the six months to March. Growth in business deposits had been particularly strong, but in the recent period business deposits have barely grown, consistent with the upturn in investment spending. In year-ended terms, growth in money and credit is now more closely aligned, after a brief period in which money growth had outstripped credit (Graph 62). As discussed in the chapter on ‘International and Foreign Exchange Markets’, the Australian dollar has appreciated modestly in recent months on a trade-weighted basis. The real trade-weighted exchange rate, which adjusts for inflation in Australia and across our trading partners, has risen by around 6 per cent over the past year. Notwithstanding the recent rise, the current level of the real exchange rate is around 8 per cent below its 1990s average (Graph 63). The Consumer Price Index (CPI) increased by 0.7 per cent in the September quarter and by 3.2 per cent over the year (Table 22, Graph 64). Measures of underlying inflation increased by between ½ and ¾ per cent in the quarter and are running between 2½ and 3¼ per cent in year-ended terms (Graph 65). The statistical measures based on the quarterly distribution of price changes – the weighted median and the trimmed mean – suggest that underlying inflation is 2½–2¾ per cent. In contrast, the exclusion-based measures continue to report underlying inflation towards the top end of this range and, in year-ended terms, were similar to CPI inflation in the September quarter. Measures of the weighted median and trimmed mean calculated from the year-ended rather than the quarterly distribution of price changes are also running closer to 3 per cent.
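The weighted median and trimmed mean mentioned above are statistical measures computed from the distribution of CPI component price changes, with each component weighted by its expenditure share. A minimal sketch of how each is computed — the item price changes and weights below are made-up illustrative values, not ABS data:

```python
# Weighted median and trimmed mean of quarterly price changes.
# Each CPI component carries an expenditure weight; both measures
# sort components by price change and work with cumulative weight.
# All numbers below are illustrative, not actual ABS components.
items = [
    # (quarterly price change %, expenditure weight)
    (12.0, 0.02),  # e.g. a vegetables-style spike
    (-3.0, 0.03),
    (0.4, 0.40),
    (0.7, 0.35),
    (1.5, 0.15),
    (4.0, 0.05),
]

def weighted_median(items):
    """Price change of the item at the 50th weighted percentile."""
    total = sum(w for _, w in items)
    cum = 0.0
    for change, w in sorted(items):
        cum += w
        if cum >= total / 2:
            return change

def trimmed_mean(items, trim=0.15):
    """Drop `trim` of the weight from each tail, average the rest."""
    total = sum(w for _, w in items)
    lo, hi = trim * total, (1 - trim) * total
    cum, num, den = 0.0, 0.0, 0.0
    for change, w in sorted(items):
        # Weight of this item falling inside the kept band [lo, hi]:
        inside = max(0.0, min(cum + w, hi) - max(cum, lo))
        num += change * inside
        den += inside
        cum += w
    return num / den

print(f"weighted median: {weighted_median(items):.2f}%")
print(f"trimmed mean:    {trimmed_mean(items):.2f}%")
```

Note how the 12 per cent spike barely moves either measure, even though it would lift a plain weighted average — that is exactly why these measures strip out the kind of one-off movements the text describes.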
The divergence between the different measures has arisen because large quarterly price movements have occurred across a broad range of items over the past year, and these movements are removed from the quarterly-based measures. As it is more likely that these movements reflect a series of one-off factors rather than ongoing inflationary pressures, the Bank's assessment is that underlying inflation is currently running at around 2¾ per cent. For more information on these measures, see ‘Box D: Underlying Inflation’ in the May 2002 Statement on Monetary Policy. The largest contributor to the rise in the CPI in the September quarter was an increase of 12 per cent in vegetable prices, partly reflecting the drought. The effects of the drought were also apparent in the 3 per cent fall in beef and veal prices, but are not yet evident for other items such as fruit and bread. In year-ended terms, the most significant contribution to CPI inflation was made by holiday travel & accommodation prices, which increased by almost 13 per cent, owing mainly to increases in the cost of overseas holiday travel, but also reflecting a number of one-off factors such as the introduction of the Ansett and insurance airfare levies at the end of last year. House purchase costs have increased steadily and are now 3¾ per cent higher than a year ago, driven by ongoing strength in the housing market and the effect of the removal of the Commonwealth Additional Grant for first-home buyers. Reflecting large increases in reinsurance premiums, withdrawals of low-cost providers from the industry, increased payouts and weaker investment returns, the price of insurance services has also increased steadily, to be 4½ per cent higher over the year. The rise in insurance costs is likely to have contributed to above-average increases in the prices for child care and sports participation in the September quarter.
There were a number of price falls in the quarter, including motor vehicle prices, which fell by 0.9 per cent to be around ½ per cent lower over the year. Fuel prices also fell, by 1.2 per cent in the quarter, though they are 2.3 per cent higher over the year. Pharmaceutical prices fell in the quarter, reflecting the seasonal effect of the Pharmaceutical Benefits Scheme, though health costs more generally are around 6 per cent higher over the year, owing mainly to higher private health insurance premiums. Audio, visual & computing prices continued to fall, declining by 1.8 per cent in the September quarter to be 4.2 per cent lower than a year earlier. Tradable goods prices were flat in the September quarter, reflecting the influence of subdued world price movements, but were 2¼ per cent higher over the year. In contrast, the strength in the domestic economy has contributed to a general rise in non-tradables inflation over the past year or so, to 4 per cent over the year to the September quarter. Upstream inflationary pressures remain moderate. In the September quarter, final-stage producer prices increased by 0.5 per cent, and were 1.4 per cent higher over the year (Table 23, Graph 66). The largest contributions to the quarterly rise were from increases in building materials prices and the cost of utilities. These price rises were partially offset by falls in the price of meat products. Petrol prices also fell in the quarter, though they have had little effect on final-stage producer price inflation over the year. Price increases were less prevalent at the intermediate and preliminary stages of production in the September quarter. Consistent with the pattern for much of this year, domestic price increases in the quarter were small. Rises in iron and steel prices and the cost of technical services, in particular of consultant engineers, were partly offset by significant falls in prices for beef and dairy cattle.
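The tradables/non-tradables split above implies a simple weighted-average identity for CPI inflation, cpi ≈ w·tradables + (1 − w)·non-tradables, which can be used to back out the approximate tradables share of the basket from the year-ended figures quoted in the text. This is only approximate, since the published figures are rounded and the decomposition ignores interaction effects:

```python
# CPI inflation is approximately an expenditure-weighted average of
# tradables and non-tradables inflation:
#   cpi = w * tradables + (1 - w) * non_tradables
# Backing out the implied tradables weight w from the (rounded)
# year-ended figures quoted in the text:
cpi, tradables, non_tradables = 3.2, 2.25, 4.0  # per cent, year-ended

w = (non_tradables - cpi) / (non_tradables - tradables)
print(f"implied tradables share of the CPI basket: {w:.0%}")  # roughly 46%
```

The exercise also makes the text's point concrete: with tradables prices flat, the 3.2 per cent headline rate is being driven mainly by the non-tradables side of the basket.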
Reflecting the depreciation of the exchange rate in the September quarter, the price of imported components, at both the preliminary and intermediate stages of production, rose after falling for much of the past year. The weak world economy appears to be putting continued downward pressure on world prices of a number of manufactured goods, such as industrial machinery and electronic goods. The various business surveys suggest that upstream cost pressures remain muted. Purchase costs as reported in the NAB survey are only picking up slightly, while according to the ACCI-Westpac survey of manufacturers, the net balance of firms experiencing cost increases remains on a downward trend. Most labour cost indicators continue to suggest that wage pressures are subdued though, consistent with the ongoing recovery in the labour market, there are some signs that the wage cycle may be near a trough. In the June quarter, growth in the wage cost index (WCI) for total pay edged up in quarterly seasonally adjusted terms, though it remained 3.1 per cent higher than a year earlier (Table 24). At the industry level, the WCI continued to record the fastest annual wage growth in the electricity, gas and water industry (4.0 per cent), and the slowest wage growth in transport and storage (2.5 per cent). Wage growth in property and business services experienced the largest deceleration, falling from a peak of 4.8 per cent in early 2001 to 3.0 per cent, reflecting the weaker labour market outcomes in that sector. Business surveys suggest a levelling out in wage pressures. Total labour costs, as reported in the NAB survey, appear to have reached a trough in year-ended terms in the March quarter, although firms expect labour cost growth to pick up only slightly in the near term. The NAB survey reports that businesses are having increasing difficulty attracting suitable labour.
The ACCI-Westpac survey tells a similar story, despite an easing in the difficulty of finding suitable labour in the September quarter. Data from the Department of Employment and Workplace Relations on enterprise bargaining agreements continue to indicate an easing in the pace of wage growth. New federal enterprise agreements ratified in the June quarter provided an average annualised wage increase of 3.6 per cent, down from around 4 per cent a year earlier. The latest reading, however, was weighed down by an unusually high representation of retail sector agreements, which tend to incorporate relatively low wage increases. As the representation of new agreements shifts towards industries which traditionally pay higher wage increases, such as construction and manufacturing, this trend is likely to be reversed. Wage increases for the stock of existing agreements, which have been on a gradual upward trend over the past couple of years, remained at 3.8 per cent in the June quarter. According to the latest Mercer Quarterly Salary Review, executives' base salaries rose by 4.5 per cent over the year to September, at the lower end of the 4½–5 per cent range that has existed over much of the past four years. Differences between the various wage-bill measures persist. Average weekly ordinary-time earnings of full-time adults (AWOTE) grew by 0.8 per cent in the June quarter, to be 5.2 per cent higher over the year. (For a discussion of the interpretation of wage-bill measures, refer to Box B of the August 2002 Statement on Monetary Policy.) The national accounts measure of compensation per employee grew by 1.6 per cent in the June quarter to be 3.2 per cent higher over the year; on a per hour basis, compensation grew by 1.3 per cent in the June quarter to be 4.3 per cent higher over the year. The faster pace over the past year on a per hour basis reflects a decline in average hours worked.
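The link between the per-employee and per-hour measures above is a simple ratio: compensation per hour equals compensation per employee divided by average hours worked per employee, so the gap between the two growth rates reveals the change in average hours. A quick check using the year-ended figures from the text:

```python
# Compensation per hour = compensation per employee / average hours
# worked per employee, so in growth terms (approximately):
#   per-hour growth ≈ per-employee growth - growth in average hours.
per_employee_growth = 3.2   # per cent over the year, from the text
per_hour_growth = 4.3       # per cent over the year, from the text

# Implied change in average hours worked (exact ratio form):
implied_hours_growth = ((1 + per_employee_growth / 100)
                        / (1 + per_hour_growth / 100) - 1) * 100
print(f"implied change in average hours: {implied_hours_growth:.1f}%")
```

The negative result (around minus 1 per cent) is consistent with the text's observation that the faster per-hour growth reflects a decline in average hours worked.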
Unit labour costs based on compensation per hour worked increased by 2.8 per cent over the first half of this year, but remain subdued in year-ended terms at 1.3 per cent. Most measures of inflation expectations remain contained. The latest NAB quarterly survey reported that businesses expect inflation in both retail and other final product prices to be 0.4 per cent in the December quarter (Graph 67). Similarly subdued expectations were reported in the ACCI-Westpac survey of manufacturers. In contrast, consumer inflation expectations, as measured by the Melbourne Institute survey, rose in October, although this measure of expectations can be volatile. Longer-term inflation expectations of investors, as measured by the difference between 10-year bond yields and indexed bonds, have fluctuated between 2 and 2½ per cent in recent months. Financial market economists surveyed by the Bank have revised up their median inflation forecast for the year to June 2003, from 2.4 per cent to 2.6 per cent, following the release of the September quarter CPI data (Table 25). The forecast for median inflation over the following year remains at 2.5 per cent. The median inflation forecast of trade union officials, as surveyed by the Australian Centre for Industrial Relations Research and Training (ACIRRT), has been revised up by ½ percentage point to 3.5 per cent for the year to June 2003, while the median forecast for June 2004 has been revised down marginally to 3.4 per cent. The CPI outcome for the September quarter was generally consistent with the short-term outlook presented in previous Statements, which suggested that underlying inflation was likely to remain close to the target mid-point during the second half of 2002. 
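The bond-market measure of longer-term inflation expectations referred to above is the "breakeven" spread: the difference between nominal 10-year bond yields and the yield on indexed (inflation-linked) bonds. A minimal sketch — the yield levels below are illustrative assumptions, chosen only so that the spread falls in the 2–2½ per cent range quoted in the text:

```python
# Breakeven inflation expectation = nominal 10-year bond yield minus
# indexed (real) 10-year bond yield. The yields below are illustrative
# assumptions, not actual market data from the period.
nominal_10y = 5.4   # per cent, assumed for illustration
indexed_10y = 3.1   # per cent, assumed for illustration

breakeven = nominal_10y - indexed_10y
print(f"breakeven inflation expectation: {breakeven:.1f}%")
assert 2.0 <= breakeven <= 2.5  # consistent with the range in the text
```

Because indexed bonds compensate holders for realised inflation, the spread over them approximates the average inflation rate investors expect over the bond's term (ignoring risk and liquidity premia).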
As discussed above, there continues to be divergence among the different underlying measures, though the Bank's assessment is that underlying inflation was around 2¾ per cent over the year to the September quarter, having fallen from a peak of around 3¼ per cent at the end of 2001. Cost and price indicators provide little evidence of any generalised upward pressure on inflation in the short term. Domestic producer prices have moderated over the past 18 months and business surveys continue to report only moderate upstream price rises, although increases in some non-wage business costs, particularly utility and insurance charges, are putting some pressure on business margins. Wage and labour costs also remain contained, but the improvement in the labour market over the course of the past year suggests that these are unlikely to ease any further. Looking further ahead, underlying inflation is expected to remain within the target range. Forecasts presented in previous Statements had envisaged that upward pressure on inflation would develop, with inflation rising gradually to the top of the target range over the next year or so. This assessment embodied expectations of a gradually improving global situation in which the global recovery would become more firmly established. In that scenario, it was expected that the Australian economy would continue to expand at close to its recent pace, which would lead to gradually increasing capacity utilisation and upward pressures on wages and prices. But with the global situation now looking less favourable, some easing in the pace of growth in the Australian economy appears likely in the coming year from the strong pace maintained over the recent period. That being the case, inflation pressures in Australia are expected to be slightly weaker than was embodied in previous forecasts, and hence underlying inflation is now expected to remain close to its recent level over the next year or so. 
The drought is likely to have a noticeable influence on CPI outcomes in the short term. The principal effect of the drought on inflation will be an increase in food prices, which will keep CPI inflation around the top end of the Bank's target range over the next quarter or so. Underlying inflation, however, is likely to be less affected, and indeed there may be a small dampening impact as a result of the adverse effect of the drought on output growth. Over a longer period, as these temporary effects drop out of the calculation, the rate of CPI inflation is expected to converge towards the underlying measures, and therefore to return to a rate that is within the target range. There are, as always, a number of sources of risk around this forecast. If the global recovery were to regain momentum, somewhat stronger growth in the Australian economy could be expected and hence capacity pressures could well emerge within the forecast period. This would see a somewhat higher inflation outcome than the one envisaged in the current forecast. Reinforcing this risk is that, as noted above, CPI inflation is likely to remain around 3 per cent in the near term and, given relatively firm demand conditions, this could become embodied in ongoing inflation expectations of wage and price setters. On the other hand, a slower-than-expected world recovery would have adverse effects on Australian growth, as well as generating additional downward pressure on world prices, which would result in inflation declining toward the lower end of the target range. At present these risks appear to be evenly balanced.
https://rba.gov.au/publications/bulletin/2002/nov/1.html
The following publications were produced by former SCSEEC entities: by the Ministerial Council on Education, Early Childhood Development and Youth Affairs (MCEECDYA) for publications issued from 2009 to 2012, or by the Ministerial Council for Education, Employment, Training and Youth Affairs (MCEETYA) for publications issued prior to mid-2009. The archived publications presented here include nationally agreed guidelines and/or documents in the areas of school education, early childhood development, employment, training (including teacher training), and youth affairs. At the 13th MCEETYA meeting in July 2002, MCEETYA Ministers endorsed a new Ministerial Declaration on Adult Community Education (ACE). The Declaration emphasises the achievement of community capacity building through community ownership, and the importance of the ACE sector as a pathway to further education and training for “second chance” learners. The goals and strategies demonstrate Ministers’ commitment to the future development of adult community education in Australia, and firmly place adult community education as a significant contributor within the continuum of education and training provision in Australia. This declaration reaffirms the Ministerial Declaration on Adult Community Education (ACE), agreed to by MCEETYA Ministers at their July 2002 meeting. This revision extends acknowledgement of the value of ACE to its potential to respond to changed industrial, demographic and technological circumstances, and encourages a collaborative approach to ACE, to allow the sector to make a greater contribution to supporting the Council of Australian Governments’ (COAG) productivity agenda for skills and workforce development. The declaration also identifies ACE as a key player in the response to the Australian Government’s Social Inclusion policy agenda, and acknowledges ACE as a significant contributor to education and training provision.
It demonstrates the commitment of Commonwealth, State and Territory Ministers to work collaboratively to maximise positive outcomes from this sector in Australia. In November 2010, MCEECDYA endorsed the National Information Agreement on Early Childhood Education and Care (NIA ECEC), developed in consultation with the Australian Government, States and Territories, as well as key data agencies. This agreement was made to facilitate and improve the collection, sharing and reporting of early childhood education and care information. The NIA ECEC is an important step in national efforts to improve the quality and reliability of early childhood education and care data. This framework was commissioned by MCEETYA and prepared by the MCEETYA Gender Equity Taskforce and Reference Group. MCEETYA Ministers endorsed the framework at their fifth meeting, in Brisbane in 1996. The framework proposes ten principles for action, underpinning a series of five strategic directions, to be taken up by States and Territories’ schools and systems, education practitioners, parents and school communities. It builds on the work already undertaken through the companion documents, The National Policy for the Education of Girls in Australian Schools and the National Action Plan for the Education of Girls 1993–97, and draws upon growing understandings about the construction of gender and its implications for policy and practice, as well as developments in education which examine the differences in the experiences and outcomes of schooling for both girls and boys, and for different groups of girls and boys. The framework is based on action in the areas of: understanding the process of construction of gender; curriculum, teaching and learning; violence and school culture; post-school pathways, and supporting change. The National Action Plan for the Education of Girls 1993–97 was endorsed by Ministers meeting as the Australian Education Council, at their 68th meeting, in Auckland in 1992. 
The National Action Plan serves as a guide for schools and systems, education practitioners, parents and school communities in mainstreaming policy making to achieve the ongoing objectives of the National Action Plan, and as a practical manual to assist educators to achieve these objectives in their day-to-day work. The National Action Plan was followed by the report, Gender Equity: A Framework for Australian Schools. Please note: this title is out of print; hard copies may be available at major State and Territory libraries. The National Policy for the Education of Girls in Australian Schools was endorsed by Ministers meeting as the Australian Education Council, at their 54th meeting, in Hervey Bay, Queensland, in 1987, and was subsequently endorsed by the National Catholic Education Commission and the National Council of Independent Schools Associations. The National Policy was augmented in 1993 by the National Action Plan for the Education of Girls in Australian Schools 1993–97, which was itself succeeded in 1996 with Ministers’ endorsement of the report, Gender Equity: A Framework for Action on Gender Equity in Schooling. Please note: this title is out of print; hard copies may be available at major State and Territory libraries. This report, prepared by researchers from Phillips Curran and KPA Consulting, was commissioned by MCEETYA Ministers to analyse Commonwealth decisions in relation to higher education, with an emphasis on issues in areas of concern to State and Territory governments. Giving Credit Where Credit is Due consolidates findings from the study, published in 2005 in the draft reports, Independent Study of the Higher Education Review: Stage 2 Report, Volumes 1 and 2, and it makes recommendations to improve credit transfer. This study involved extensive consultation with the VET and higher education sectors, identified gaps in practice, and made recommendations for initiatives to drive further improvement.
The study produced three reports, one of which, Independent Study of the Higher Education Review: Stage 2 Report, was published in two volumes. Volume 1 analyses the status of Australian higher education in 1993, and Volume 2 analyses the decisions announced in the ministerial statement, Our Universities Backing Australia’s Future. The Good Practice Principles for Credit Transfer and Articulation from VET to Higher Education were adopted by MCEETYA Ministers on 13 May 2005, and constitute a draft version of the principles later approved by MCEETYA in 2006. Credit transfer and articulation arrangements increase opportunities for students with prior VET sector experience and qualifications to access higher education by facilitating student mobility between institutions and sectors. These principles apply nationally to all credit transfer and articulation arrangements by both recognised VET and Higher Education Providers. They set some broad goals to encourage measurable improvement over time, and they provide a benchmark against which progress can be assessed and reported. This report, prepared by researchers from Phillips Curran and KPA Consulting, was commissioned by MCEETYA Ministers to analyse Commonwealth decisions in relation to higher education, with an emphasis on issues in areas of concern to State and Territory governments. This study involved extensive consultation with the VET and higher education sectors, identified gaps in practice, and made recommendations for initiatives to drive further improvement. The study produced three reports, one of which, Independent Study of the Higher Education Review: Stage 2 Report, was published in two volumes. Volume 1 analyses the status of Australian higher education in 1993, and Volume 2 analyses the decisions announced in the ministerial statement, Our Universities Backing Australia’s Future. 
The other report from the study, Giving Credit Where Credit is Due – Final Report, published in 2006, consolidates findings from the study, and makes recommendations to improve credit transfer. The National Protocols for Higher Education Approval Processes were recommended by the Joint Committee on Higher Education (JCHE) and approved by MCEETYA Ministers on 31 March 2000. In December 2007, these were replaced by the Revised National Protocols for Higher Education Approval Processes. A consultation process was held in April 2006, which informed the Joint Committee on Higher Education’s (JCHE’s) recommendations for changes to the National Protocols that were considered by MCEETYA Ministers in Brisbane in July 2006. Submissions of feedback that were received during the consultation process are included. MCEETYA Ministers approved the Principles for Good Practice Information on Credit Transfer and Articulation from Vocational Training and Education to Higher Education at their 20th meeting in Brisbane in July 2006, following consultation with stakeholders on draft principles approved by MCEETYA in 2005 (Good Practice Principles for Credit Transfer and Articulation from VET to Higher Education). These final principles respond to the issues raised in those consultations. MCEETYA Ministers approved the Revised National Protocols for Higher Education Approval Processes at their meeting in Brisbane in July 2006, with some clarifications that were approved in October 2007. These revised protocols commenced operation in December 2007. The Australian Information and Communications Technology in Education Committee (AICTEC) developed this action plan, which was endorsed by MCEETYA Ministers in May 2005. 
It identifies priorities for action and provides a common agenda on which stakeholders, including governments, education and training providers, and the private sector, can work together to fulfil the vision outlined by Ministers in the MCEETYA Joint Statement on Education and Training in the Information Economy. This report forms the overarching statement for the Learning in an Online World series of policy, strategy, frameworks and action plan documents, prepared by the MCEETYA ICT in Schools Taskforce, to support jurisdictions and schools in meeting the challenge of all schools confidently using ICT in their everyday practices to improve learning, teaching and administration. Learning in an Online World: Contemporary learning describes the environment, articulates the national policy framework and identifies significant actions required. This report is part of the Learning in an Online World series of documents prepared by the MCEETYA ICT in Schools Taskforce. It outlines ways in which the highly technological and information-rich world shapes student expectations and processes for learning, and discusses the innovative and effective uses of ICT that empower teachers to personalise student learning. MCEETYA Ministers approved the publication of this report at their meeting in Brisbane in July 2006. This document, part of the Learning in an Online World series prepared by the MCEETYA ICT in Schools Taskforce, sets out the vision and strategy of Australian and New Zealand Education Ministers for continued provision of online curriculum content beyond 2005. This report is part of the Learning in an Online World series of documents prepared by the MCEETYA ICT in Schools Taskforce. It highlights issues in the development and application of school-based and systemic leadership to support the seamless integration of ICT in 21st century learning environments.
Learning in an Online World: Leadership Strategy was published in 2006, following approval by MCEETYA Ministers at their meeting in Brisbane in July 2006. This report is part of the Learning in an Online World series of documents prepared by the MCEETYA ICT in Schools Taskforce. It articulates national priorities for action by schools and associated educational organisations. Learning in an Online World: Learning Architecture Framework enables the school sector to share information through an architectural paradigm – a Learning Architecture – that supports teachers, students and administrators to effectively plan, design, deliver, assess and report. This report, part of the Learning in an Online World series of documents prepared by the MCEETYA ICT in Schools Taskforce, aims to guide strategic decision-making in jurisdictions and schools around the planning of learning spaces in schools, particularly environments shaped by ICT. Learning in an Online World: Learning Spaces Framework was published in April 2008 following MCEETYA Ministers’ out-of-session approval. MCEETYA Ministers endorsed the National Bandwidth Action Plan at their July 2003 meeting, as the basis for the development of a National Implementation Plan, to be prepared by the MCEETYA ICT in Schools Taskforce. The National Bandwidth Action Plan provides a framework that addresses the needs of all Australian schools to improve their access to broadband services. The National Bandwidth Implementation Plan 2004–05 was prepared by the MCEETYA ICT in Schools Taskforce. It provides the detail critical to realising the intent of the National Bandwidth Action Plan, including identification of opportunities for collaborative work with the other education sectors and Australian Government agencies. This report, part of the Learning in an Online World series of documents prepared by the MCEETYA ICT in Schools Taskforce, focuses on ICT as an enabler of good pedagogy. 
It highlights issues for consideration when planning for integration of ICT in the learning environment. The strategy notes that the considered use of ICT can transform the teacher’s role by creating new learning environments, and that teacher pedagogies determine the extent to which the possibilities offered by technology are realised in education settings. Learning in an Online World: Research Strategy articulates national priorities for action by schools and associated educational organisations. It notes that innovative applications of technology will enable teachers and researchers to collaborate on advances in learning, and ensure that schooling sector research is easily accessible to teachers, parents and the community. The MCEETYA Joint Statement on Education and Training in the Information Economy was co-written with the Australian Information and Communications Technology in Education Committee (AICTEC). The statement provides an outline of a future nationally collaborative work plan around ICT in education and training. This was published in 2005 following MCEETYA Ministers’ out-of-session approval. The National Assessment Program ICT Literacy Sample Assessment for Year 6 and Year 10 students was conducted in October 2005, and results were released in December 2007. The report was co-authored by a MCEETYA Review Committee and an Australian Council for Educational Research (ACER) project team, and it presents findings from the first national assessment of the ICT literacy of Australian school students in years 6 and 10. This report, project work commissioned by MCEETYA and authored by a steering committee led by Felix Hudson and Kathryn Moyle, of the South Australian Department of Education and Children’s Services (DECS), identifies and reviews the technical documentation associated with some open source software that is commonly used in schools.
It investigates the question: What place does open source software have in Australian and New Zealand schools and school jurisdictions’ ICT portfolios? This report comprises three inter-related papers: (i) a review of the technical documentation accompanying open source software; (ii) a research paper about the total cost of ownership and open source software in schools, and (iii) a paper discussing a trial of open source software conducted at Grant High School, in South Australia. This paper provides a review of the technical documentation accompanying open source software. This report, project work commissioned by MCEETYA and authored by a steering committee led by Kathryn Moyle, of the South Australian Department of Education and Children’s Services (DECS), identifies and reviews the technical documentation associated with some open source software that is commonly used in schools. It investigates the question: What place does open source software have in Australian and New Zealand schools and school jurisdictions’ ICT portfolios? This report comprises three inter-related papers: (i) a review of the technical documentation accompanying open source software; (ii) a research paper about the total cost of ownership and open source software in schools, and (iii) a paper discussing a trial of open source software conducted at Grant High School, in South Australia. This paper provides an analysis of the total cost of ownership and open source software in schools. This report, project work commissioned by MCEETYA and authored by a steering committee led by Dr Kathryn Moyle, of the South Australian Department of Education and Children’s Services (DECS), and Peter Ruwoldt, of Grant High School (South Australia), identifies and reviews the technical documentation associated with some open source software that is commonly used in schools.
It investigates the question: What place does open source software have in Australian and New Zealand schools and school jurisdictions’ ICT portfolios? This report comprises three inter-related papers: (i) a review of the technical documentation accompanying open source software; (ii) a research paper about the total cost of ownership and open source software in schools, and (iii) a paper discussing a trial of open source software conducted at Grant High School, in South Australia. This paper discusses a trial of open source software conducted at Grant High School, in South Australia. As part of the National Asian Languages and Studies in Australian Schools (NALSAS) Taskforce’s terms of reference, MCEETYA asked it to develop a detailed strategic plan for Phase 2 of the implementation of the NALSAS Strategy (1999–2002), to be endorsed by MCEETYA, based on recommendations from the report, Asian Languages and Australia’s Economic Future. The plan for Phase 2 of the Strategy addresses these issues by focusing on the four strategic areas of: (i) curriculum delivery; (ii) teacher quality and supply; (iii) strategic alliances; and (iv) outcomes and accountability. This progress report was prepared by the MCEETYA National Asian Languages and Studies in Australian Schools (NALSAS) Taskforce, and endorsed for print publication by MCEETYA Ministers at their tenth meeting in Adelaide, April 1999. The report highlights the significant activities and achievements that have occurred during the first four years of the NALSAS strategy, and notes the main activities the research partners are engaging in as a result of NALSAS funding and the collaborative achievements of the strategy. Please note, as this report is now out of print, this PDF is a scanned copy of the print version. This was one of several reports commissioned by MCEETYA’s Performance Measurement and Reporting Taskforce (PMRT). 
The report provides an overview of the national and international benefits of Australia’s participation in the international student achievement studies, Trends in International Mathematics and Science Study (TIMSS) and the Programme for International Student Assessment (PISA). The report does not necessarily represent the views of either MCEETYA Ministers or PMRT members.

This was one of several reports commissioned by MCEETYA’s Performance Measurement and Reporting Taskforce (PMRT). The report provides an overview of the benefits of Australian students participating in national assessments of student achievement. The report does not necessarily represent the views of either MCEETYA Ministers or PMRT members.

This report, by Nigel Smart, Gerald Burke and Phillip McKenzie (Smart Consulting & Research and Monash University – ACER Centre for the Economics of Education and Training), was one of several reports commissioned by MCEETYA’s Performance Measurement and Reporting Taskforce’s (PMRT’s) predecessor, the National Education Performance Monitoring Taskforce (NEPMT). The report is a consolidated version of an earlier report, and provides a framework to develop nationally comparable measures of student participation, transition, retention and completion/attainment. The report does not necessarily represent the views of either MCEETYA Ministers or National Education Performance Monitoring Taskforce (NEPMT) members.

This report, prepared by Murray Print and John Hughes (Centre for Research and Teaching in Civics, University of Sydney), was one of several reports commissioned by MCEETYA’s Performance Measurement and Reporting Taskforce’s (PMRT’s) predecessor, the National Education Performance Monitoring Taskforce (NEPMT). The report presents recommendations for Key Performance Measures (KPMs) in the assessment of school Civics and Citizenship Education.
The report does not necessarily represent the views of either MCEETYA Ministers or National Education Performance Monitoring Taskforce (NEPMT) members.

This report, by John Ainley and the Australian Council for Educational Research (ACER), was one of several reports commissioned by MCEETYA’s Performance Measurement and Reporting Taskforce’s (PMRT’s) predecessor, the National Education Performance Monitoring Taskforce (NEPMT). The report was commissioned to develop a common definition of language background, culture and ethnicity to be used in nationally comparable reporting of the outcomes of students, within the context of the National Goals for Schooling in the Twenty-first Century (the Adelaide Declaration). The report does not necessarily represent the views of either MCEETYA Ministers or National Education Performance Monitoring Taskforce (NEPMT) members.

This report, by Professor Peter Cuttance and Shirley Stokes, was one of several reports commissioned by MCEETYA’s Performance Measurement and Reporting Taskforce’s (PMRT’s) predecessor, the National Education Performance Monitoring Taskforce (NEPMT). The report was commissioned to assist in the development of key performance measures for information and communication technology, and does not necessarily represent the views of either MCEETYA Ministers or National Education Performance Monitoring Taskforce (NEPMT) members.

The National Assessment Program – Literacy and Numeracy (NAPLAN) reports the full range of student achievement against a common scale, and uses a common set of tests to resolve the technical difficulties associated with equating State and Territory based tests. The first NAPLAN tests were conducted in May 2008 for all Years 3, 5, 7 and 9 students in government and non-government schools. The report includes results for Indigenous students, students with a language background other than English, and students living in metropolitan, country and remote areas.
The comparative performance of girls and boys is also reported, as well as a breakdown of student results by parental occupation and parental education.

This commissioned report, prepared by Colmar Brunton Research on behalf of MCEETYA’s Performance Measurement and Reporting Taskforce, evaluates the 2008 NAPLAN student report, to determine the extent to which parents of students assessed in Years 3, 5, 7 and 9 understand the information communicated by the NAPLAN individual student reports. The report presents the findings of this research.

The National Assessment Program – Literacy and Numeracy (NAPLAN) reports the full range of student achievement against a common scale, and uses a common set of tests to resolve the technical difficulties associated with equating State and Territory based tests. The NAPLAN tests were conducted in May 2009 for all Years 3, 5, 7 and 9 students in government and non-government schools. For the first time, the NAPLAN tests were equated, so the 2009 results can be compared with those for 2008.

This is a revision of the National Assessment Program – Civics and Citizenship Assessment Domain, and was developed by the Australian Curriculum, Assessment and Reporting Authority (ACARA), in consultation with the 2010 National Assessment Program Civics and Citizenship Review Committee. The assessment framework provides a clear definition of the scope and method of testing for the Civics and Citizenship sample assessment.

This report presents the findings from the National Assessment Program – Civics and Citizenship assessment, which was conducted in October 2004 under the auspices of MCEECDYA. The report was prepared by the MCEECDYA Performance Measurement and Reporting Taskforce, in conjunction with a review committee. The assessment measures the civic knowledge and understanding and the citizenship participation skills and civic values of Australian Year 6 and Year 10 students.
The information and assessment materials in these documents were designed to assist teachers to gauge their own students’ proficiency in civics and citizenship, and compare students’ results with the national proficiency levels and standards in civics and citizenship at the relevant year level (Year 6 or Year 10). These were published following the National Civics and Citizenship Sample Assessment 2004, which measured the civic knowledge and understanding, the citizenship participation skills and civic values of students. The participating students were from both government and non-government schools.

This report, by Nicole Wernert, Eveline Gebhart, Martin Murphy and Wolfram Schulz from the Australian Council for Educational Research (ACER), was one of several reports commissioned by MCEETYA’s Performance Measurement and Reporting Taskforce. It describes the technical aspects of the National Civics and Citizenship Sample Assessment for 2004, and summarises the main activities involved in the data collection, the data collection instruments and the analysis and reporting of the data.

This report presents the findings from the 2007 National Assessment Program – Civics and Citizenship, conducted under the auspices of MCEETYA. The MCEETYA Performance Measurement and Reporting Taskforce (PMRT) prepared the report in conjunction with a review committee. It is the second report to be published on Civics and Citizenship in the cycle of three-yearly sample assessments conducted by MCEETYA as part of its National Assessment Program. The assessment measured Year 6 and Year 10 students’ civic knowledge and understanding, their citizenship participation skills and dispositions.
A selection of items used in the 2007 National Assessment Program for Civics and Citizenship Year 6 and Year 10 School Assessment was released in 2009, to enable teachers to administer the assessment tasks under similar conditions and to gauge their own students’ proficiency in relation to the national standards.

This report, by Nicole Wernert, Eveline Gebhart, Martin Murphy and Wolfram Schulz from the Australian Council for Educational Research (ACER), was one of several reports commissioned by MCEETYA’s Performance Measurement and Reporting Taskforce. It describes the technical aspects of the National Civics and Citizenship Sample Assessment for 2007, and summarises the main activities involved in the data collection, the data collection instruments and the analysis and reporting of the data.

This document, prepared by the MCEETYA Performance Measurement and Reporting Taskforce, provides information about the ICT literacy assessment, including Education Ministers’ decisions regarding ICT; the definition of ICT literacy; a description of the ICT literacy domain, strands and the progress map; the types of items used in ICT literacy assessment; and how the results from the assessments will be reported.

This report presents the findings from the first national assessment of the ICT literacy of Australian Years 6 and 10 students, conducted in October 2005 under the auspices of MCEETYA. The MCEETYA Performance Measurement and Reporting Taskforce prepared the report, in conjunction with a review committee. The report provides a single ICT literacy scale against which the achievements of Years 6 and 10 students are reported, and proficiency levels linked to descriptions of student performance.

The information and assessment materials in these resources were designed to assist teachers to gauge their own students’ proficiency in Information and Communication Technologies (ICT) literacy.
By examining modules from the National Year 6 and Year 10 ICT Literacy Assessment, conducted in 2005, teachers may be able to design similar tasks and to judge their students’ proficiency in relation to the national standards in ICT literacy.

This report, by John Ainley, Julian Fraillon, Chris Freeman and Martin Murphy from the Australian Council for Educational Research (ACER), was one of several reports commissioned by MCEETYA’s Performance Measurement and Reporting Taskforce. It describes the technical aspects of the National ICT Literacy Sample Assessment and summarises the main activities involved in the data collection, the data collection instruments and the analysis and reporting of the data.

This report presents the findings from the National Assessment Program – ICT literacy assessment, conducted in 2008 under the auspices of MCEECDYA. The report was prepared by the MCEECDYA Performance Measurement and Reporting Taskforce, in conjunction with a review committee. The report compares the results of Australian school students by State and Territory and student sub-groups, and provides details of their achievement against an ICT literacy scale. It also enables these student achievements to be compared against those from the first national assessment of ICT literacy conducted in 2005.

The information and assessment materials in these resources were designed to assist teachers to gauge their own students’ proficiency in Information and Communication Technologies (ICT) literacy. By examining modules from the National Year 6 and Year 10 ICT Literacy Assessment, conducted in 2008, teachers may be able to design similar tasks and to judge their students’ proficiency in relation to the national standards in ICT literacy.
This report, prepared by the MCEETYA Performance Measurement and Reporting Taskforce, in conjunction with a steering committee, presents the findings from the first nationally comparable science assessment of Australian Year 6 students, which was conducted in 2003. The report provides the key results from the national sample assessment. It gives a snapshot of student results across the national science literacy scale, and an analysis of various trends across States and Territories and student sub-groups. This assessment represents a new direction in national approaches to reporting on and celebrating the achievements of Australian students and schools.

This report, commissioned by MCEETYA and authored by ACER, provides a contextual background to the assessment procedure for the 2003 National Assessment Program for Science.

The National Assessment Program – Science Literacy Year 6 is one of a suite of national assessments (with ICT and Civics and Citizenship) conducted with a random sample of students in three-yearly cycles. The MCEETYA Performance Measurement and Reporting Taskforce prepared the report, in conjunction with a review committee. It is the second assessment to be conducted on science literacy, and, for the first time nationally, the achievement of students has been compared over time and publicly reported.

This report, prepared for MCEETYA by a project team from Educational Assessment Australia and Curriculum Corporation, provides assessment items from the 2006 National Assessment Program – Science Literacy, to enable teachers to administer these items under similar conditions and gauge their own students’ proficiency in relation to the national standards.
This report, prepared for MCEETYA by a project team from Educational Measurement Solutions and Curriculum Corporation, describes the technical aspects of the National Science Literacy Sample Assessment and summarises the main activities involved in the data collection, the data collection instruments and the analysis and reporting of the data.

This Information Framework, prepared by the Data Collection and Reporting Subgroup of the MCEETYA Performance Measurement and Reporting Taskforce, specifies the areas to be reported in the National Report on Schooling in Australia (ANR) for 2008, focusing on the priority areas of performance measurement identified by MCEETYA Ministers.

The MCEETYA Performance Measurement and Reporting Taskforce prepared these protocols as a working guide for planning and implementing national sample assessments in connection with the national Key Performance Measures (KPMs). They are intended for agencies involved in planning or conducting national sample assessments and personnel responsible for administering associated tenders or contracts.

This report, by the MCEETYA Performance Measurement and Reporting Taskforce, supersedes the Measurement Framework for National Key Performance Measures of 2003. Taking account of MCEETYA decisions related to measuring performance against the National Goals for Schooling in the Twenty-first Century, this report sets out a basis for reporting progress towards the achievement of the National Goals by Australian school students, drawing on the agreed definitions of Key Performance Measures. The core of the framework is a schedule setting out Key Performance Measures and an agreed assessment and reporting cycle for the period 2003–2010.

This report, by the MCEETYA Performance Measurement and Reporting Taskforce, supersedes the Measurement Framework for National Key Performance Measures of 2005.
Taking account of MCEETYA decisions related to measuring performance against the National Goals for Schooling in the Twenty-first Century, this report sets out a basis for reporting progress towards the achievement of the National Goals by Australian school students, drawing on the agreed definitions of Key Performance Measures. The core of the framework is a schedule setting out Key Performance Measures and an agreed assessment and reporting cycle for the period 2003–2011.

This report, by the MCEETYA Performance Measurement and Reporting Taskforce, supersedes the Measurement Framework for National Key Performance Measures of 2006. Taking account of MCEETYA decisions related to measuring performance against the National Goals for Schooling in the Twenty-first Century, this report sets out a basis for reporting progress towards the achievement of the National Goals by Australian school students, drawing on the agreed definitions of Key Performance Measures. The core of the framework is a schedule setting out Key Performance Measures and an agreed assessment and reporting cycle for the period 2004–2012.

This report, by the MCEETYA Performance Measurement and Reporting Taskforce, supersedes the Measurement Framework for National Key Performance Measures of 2007. The Measurement Framework for National Key Performance Measures sets out a basis for reporting progress towards the achievement of the National Goals by Australian school students by drawing on the agreed definitions of Key Performance Measures. The core of the framework is a schedule setting out the Key Performance Measures and an agreed assessment and reporting cycle for the period 2006–2014.

This report, prepared by Gary N Marks, Julie McMillan, Frank L Jones and John Ainley (Australian Council for Educational Research and the Research School of Social Sciences, Australian National University), was one of several reports commissioned by MCEETYA’s National Education Performance Monitoring Taskforce (NEPMT).
The report aims to develop a common definition of socioeconomic background to be used for reporting of nationally comparable outcomes of schooling within the context of the statement of National Goals for Schooling in the Twenty-first Century. The report does not necessarily represent the views of either MCEETYA Ministers or National Education Performance Monitoring Taskforce (NEPMT) members.

This report, prepared by Julian Fraillon, Australian Council for Educational Research (ACER), was commissioned by the South Australian Department of Education and Children’s Services under the auspices of MCEETYA. The report constitutes Phase 1 of a planned two-phase process. The report defines a measurement construct for student well-being; outlines a methodology for measuring student well-being; and provides recommendations for ongoing work in the measuring, reporting and monitoring of student well-being (Phase 2). The report does not necessarily represent the views of either MCEETYA Ministers or individual State/Territory or Australian Government Education Ministers or departments responsible for education.

Professor Samuel Ball, Professor Ian Rae (University of Melbourne) and Professor Jim Tognolini (University of New South Wales) prepared this report for the MCEETYA National Education Performance Monitoring Taskforce. The report advocates adoption of the PISA “science literacy” definition for the purposes of primary science monitoring in Australia, whereby students would be assessed in relation to concepts chosen from major fields of science and a range of process skills. The report does not necessarily represent the views of either MCEETYA Ministers or National Education Performance Monitoring Taskforce (NEPMT) members.

The MCEETYA Performance Measurement and Reporting Taskforce commissioned this report, which was prepared by Dr Roger Jones (Quantitative Evaluation and Design Pty Ltd.).
It considers whether it would be possible to improve the accuracy of the coding of detailed parental occupation information by coding it to a higher level of aggregation of the Australian Standard Classification of Occupations (ASCO), and whether providing parents with a set of defined categories of occupation (‘self-coding’) would provide a satisfactory alternative to seeking detailed information.

The purpose of these protocols is to provide guidance to MCEETYA Performance Measurement and Reporting Taskforce (PMRT) members and Sub-groups in relation to seeking approval for the publication of documents of the PMRT, or of its predecessor, the National Education Performance Monitoring Taskforce (NEPMT). These protocols complement the MCEETYA Principles and Protocols Handling of MCEETYA documents and the AESOC Protocols for Publishing Research and Project Reports.

This report, by ACER researchers Geoff N Masters, Glenn Rowley, John Ainley and Siek Toon Khoo, commissioned by the MCEETYA Expert Working Group, provides advice on nationally comparable schools’ data collections and reporting for school evaluation, accountability and resource allocation. The report does not necessarily represent the views of either MCEETYA Ministers or Expert Working Group members.

This report, prepared by Martin Murphy and Wolfram Schulz, Australian Council for Educational Research (ACER), was one of several reports commissioned by MCEETYA’s Performance Measurement and Reporting Taskforce (PMRT). The report provides PMRT and its subgroups with information about the sampling process used in surveys conducted under its National Assessment Plan. The report does not necessarily represent the views of either MCEETYA Ministers or Performance Measurement and Reporting Taskforce (PMRT) members.
The information and assessment materials in this document were designed to assist teachers to gauge their own students’ proficiency in scientific literacy, and compare the students’ results with the national proficiency levels and standards in scientific literacy at Year 6 level. These were published following the National Science Literacy Sample Assessment 2003, which measured the scientific literacy of students across three main areas, on items that related to everyday contexts. The participating students were from both government and non-government schools.

This report, prepared by Denis Muller & Associates, was one of several reports commissioned by the MCEETYA National Education Performance Monitoring Taskforce (NEPMT). The report provides a literature review on effective target setting and an analysis of education systems that have included target-setting as part of national or State reporting. The report does not necessarily represent the views of either MCEETYA Ministers or National Education Performance Monitoring Taskforce (NEPMT) members.

On 23 April 1998, the Ministerial Council on Education, Employment, Training and Youth Affairs (MCEETYA) agreed to the release of a draft set of revised National Goals for Schooling. In releasing these draft goals, Ministers said that they believed that these draft goals provided “an opportunity to chart a real direction for our children’s schooling as we move into the 21st century”. The existing goals were originally agreed at a meeting in Hobart in 1989, and became known as the Hobart Declaration. Ministers agreed to release the draft goals for a six-month public discussion and consultation period. The National Goals for Schooling in the Twenty-first Century (the Adelaide Declaration) superseded this discussion paper.
Following a 1998 review discussion paper, in 1999 the Ministerial Council on Education, Employment, Training and Youth Affairs (MCEETYA) endorsed the statement of Australia's National Goals for Schooling in the Twenty-first Century (the Adelaide Declaration). Comprising national goals for schooling, the Adelaide Declaration establishes a foundation for collaborative action between schools, States and Territories and the Commonwealth, in the development of specific objectives and strategies, including in the areas of curriculum and assessment. In December 2008, these goals for schooling were superseded by the Melbourne Declaration on Educational Goals for Young Australians, setting the direction for Australian schooling for the following ten years.

These goals for schooling were agreed to by State, Territory and Commonwealth Ministers of Education, meeting as the 60th Australian Education Council in Hobart, on 14–16 April 1989. Council agreed to act jointly to assist Australian schools in meeting the challenges of our times, and made an historic commitment to improving Australian schooling within a framework of national collaboration.

The National Report on Schooling in Australia provides information on progress towards the achievement of the National Goals for Schooling in Australia.

These publication procedures detail the writing guidelines, consultation, approval and publication processes for the production of the National Report on Schooling in Australia.

This report describes the results of testing conducted during 2000, in which the achievement of students in each of Years 3 and 5 was measured against the national benchmarks for reading. Because the national benchmarks represent minimum acceptable standards, MCEETYA Ministers determined that the national goal should be that “all students will achieve at least the benchmark level of performance”. These publications reflect the continuing development of the benchmark reporting process.
National Report on Schooling in Australia 2001. Preliminary Paper: National Benchmark Results: Reading, Writing and Numeracy, Year 7. These publications reflect the continuing development of the benchmark reporting process, and form part of the commitment of Ministers for Education to inform the public of progress made towards the achievement of the National Goals for Schooling in the Twenty-first Century. This edition now adds data for Year 7 reading, writing and numeracy. This was released in conjunction with the 2002 Reading, Writing and Numeracy Benchmark Results for Years 3, 5 and 7.

National Report on Schooling in Australia 2001. Preliminary Paper: National Benchmark Results: Reading, Writing and Numeracy, Years 3 and 5. These publications reflect the continuing development of the benchmark reporting process, and form part of the commitment of Ministers for Education to inform the public of progress made towards the achievement of the National Goals for Schooling in the Twenty-first Century. This edition now adds data for each of Years 3 and 5, for reading, writing and numeracy.

National Report on Schooling in Australia 2002. Preliminary Paper: National Benchmark Results: Reading, Writing and Numeracy, Years 3, 5 and 7. These publications reflect the continuing development of the benchmark reporting process, and form part of the commitment of Ministers for Education to inform the public of progress made towards the achievement of the National Goals for Schooling in the Twenty-first Century. This edition now adds data for each of Years 3, 5 and 7, for all three areas: reading, writing and numeracy. This edition was released in conjunction with the 2001 Reading, Writing and Numeracy Benchmark Results for Year 7.

National Report on Schooling in Australia 2003. Preliminary Paper: National Benchmark Results: Reading, Writing and Numeracy, Years 3, 5 and 7.
These publications reflect the continuing development of the benchmark reporting process, and form part of the commitment of Ministers for Education to inform the public of progress made towards the achievement of the National Goals for Schooling in the Twenty-first Century. This 2003 edition, like that for 2002, adds data for each of Years 3, 5 and 7, for all three areas (reading, writing and numeracy), and provides new data on the performance of students in metropolitan, provincial, remote and very remote areas.

National Report on Schooling in Australia 2004. Preliminary Paper: National Benchmark Results: Reading, Writing and Numeracy, Years 3, 5 and 7. These publications reflect the continuing development of the benchmark reporting process, and form part of the commitment of Ministers for Education to inform the public of progress made towards the achievement of the National Goals for Schooling in the Twenty-first Century.

National Report on Schooling in Australia 2005. Preliminary Paper: National Benchmark Results: Reading, Writing and Numeracy, Years 3, 5 and 7. These publications reflect the continuing development of the benchmark reporting process, and form part of the commitment of Ministers for Education to inform the public of progress made towards the achievement of the National Goals for Schooling in the Twenty-first Century.

National Report on Schooling in Australia 2006. Preliminary Paper: National Benchmark Results: Reading, Writing and Numeracy, Years 3, 5 and 7. These publications reflect the continuing development of the benchmark reporting process, and form part of the commitment of Ministers for Education to inform the public of progress made towards the achievement of the National Goals for Schooling in the Twenty-first Century.

National Report on Schooling in Australia 2007. Preliminary Paper: National Benchmark Results: Reading, Writing and Numeracy, Years 3, 5 and 7.
These publications reflect the continuing development of the benchmark reporting process, and form part of the commitment of Ministers for Education to inform the public of progress made towards the achievement of the National Goals for Schooling in the Twenty-first Century.

The National Framework for Rural and Remote Education was developed by the MCEETYA Taskforce on Rural and Remote Education, Training, Employment and Children's Services, to: provide a framework for the development of nationally agreed policies and support services; promote consistency in the delivery of high quality education services to rural and remote students and their families; provide reference points and guidance for non-government providers of services and support for education in rural and remote areas; and facilitate partnership building between government and non-government providers of services and support related to the provision of education in regional, rural and remote locations.

The National Safe Schools Framework was prepared by the MCEETYA Student Learning and Support Services Taskforce, and endorsed by MCEETYA Ministers in 2003. The National Safe Schools Framework incorporates existing good practice and provides an agreed national approach to help schools and their communities address issues of bullying, harassment, violence, and child abuse and neglect.

The original National Safe Schools Framework was endorsed by MCEETYA Ministers in 2003. The revised version builds on the original, and was launched by the Hon. Peter Garrett AM MP, Minister for School Education, Early Childhood and Youth, on 18 March 2011. The National Safe Schools Framework provides a vision – all Australian schools are safe, supportive and respectful teaching and learning communities that promote student wellbeing – and a set of guiding principles for safe and supportive school communities that also promote student wellbeing and develop respectful relationships.
It identifies nine elements to assist Australian schools to continue to create teaching and learning communities where all members of the community both feel and are safe from harassment, aggression, violence and bullying. It also responds to new and emerging challenges for school communities, such as cybersafety, cyberbullying and community concerns about young people and weapons. The Framework’s whole school approach to creating safe and supportive learning and teaching communities acknowledges the strong interconnections between student safety, student wellbeing and learning. Harassment, aggression, violence and bullying are less likely to occur in a caring, respectful and supportive teaching and learning community.

This discussion paper was prepared by MCEETYA's Schools Resourcing Taskforce, and endorsed by MCEETYA Ministers in July 2006. Among its key findings, the paper notes that the labour force participation rate is lower and the unemployment rate is higher for migrants with lower levels of English proficiency. All Australian governments share the costs and benefits of migration policy, which drives the number and composition of newly arrived students requiring ESL tuition (ESL–NA students).

In August 2006, MCEETYA Ministers approved the release of the Statements of Learning for English, Mathematics, Science, Civics and Citizenship, and Information and Communication Technologies (ICT), following their earlier endorsement of the Statement of Learning for English. The Statements of Learning describe the essential skills, knowledge, understandings and capacities that all young Australians should have the opportunity to learn by the end of Years 3, 5, 7 and 9. The Statements of Learning and their Professional Elaborations are designed for use by State and Territory departments of education or curriculum authorities, to guide the future development of relevant curriculum documents.
Prepared by the MCEETYA Teacher Quality and Educational Leadership Taskforce, and endorsed by MCEETYA Ministers in July 2003, this framework provides an architecture within which generic, specialist and subject-area specific professional standards for teaching can be developed at national and State and Territory levels. The framework complements the National Goals for Schooling, providing an agenda for strategic action on teaching and learning policy at the national level.

The MCEETYA Quality Sustainable Teacher Workforce Working Group commissioned this research report by Nexus Strategic Solutions. The report documents existing performance management and development policies and practices in Australian schools, to enable the sharing of examples of best practice.

The Rewarding Quality Teaching report, prepared by the Gerard Daniels consultancy, was commissioned for MCEETYA to inform the development and implementation of new teacher pay arrangements.

The Career and Transition Services Framework, prepared by the MCEETYA Taskforce on Transition from School, and endorsed by MCEETYA Ministers in July 2003, is a guide to assist jurisdictions in planning for, and providing, services to support and prepare young people to make successful transitions through school and between school and post-school destinations. The Framework expands upon many of the concepts in the Ministerial Declaration, Stepping Forward: improving pathways for all young people, and its Action Plan.

The Ministerial Declaration, Stepping Forward: Improving Pathways for all Young People, was endorsed by MCEETYA Ministers in July 2002. The declaration outlines Ministers' commitment to young Australians and provides a common direction for improving social, educational and employment outcomes for all young people. Several related publications accompany the Stepping Forward declaration, namely a Checklist for New Initiatives; an Action Plan; Key Areas for Action; and Sharing What Works: a collection of case studies.
The Ministerial Declaration, Stepping Forward: Improving Pathways for all Young People, endorsed by MCEETYA Ministers in July 2002, outlines Ministers' commitment to young Australians and provides a common direction for improving social, educational and employment outcomes for all young people. As part of the declaration, the Checklist for New Initiatives is a practical guide to ensure that initiatives aimed at supporting young people's transitions complement and build on each other.

The Action Plan to implement the Ministerial Declaration, Stepping Forward: Improving Pathways for all Young People, was prepared by the MCEETYA Taskforce on Transition from School and endorsed by MCEETYA Ministers in late 2002. The Action Plan outlines a vision in which young people are assisted to attain goals and aspirations, and describes a national approach to youth transitions underpinned by five themes. It is a companion document to the Ministerial Declaration, Stepping Forward: improving pathways for all young people.

Forming part of the Stepping Forward Ministerial Declaration: Action Plan, the Key Areas for Action is a table, prepared by the MCEETYA Taskforce on Transition from School, listing areas of activity that jurisdictions had underway in 2002 to respond to the declaration's goals for improving the social, educational and employment outcomes of all young people.

MCEETYA Ministers endorsed this strategy at their meeting in Brisbane in 1996. A national project of the Australian National Training Authority (ANTA), under the auspices of the MCEETYA VEET Women's Taskforce, the strategy sets a direction for governments, industry and training providers to ensure that the needs of women are consistently addressed as a priority in policy making, planning, resourcing, implementing and monitoring vocational education and training. Please note that, as this report is now out of print, this PDF is a scanned copy of the print version.
This is a suite of eleven research papers commissioned by the MCEETYA Performance Measurement and Reporting Taskforce (PMRT) and developed by Professor David Andrich and Murdoch University, to resolve a range of issues associated with the assessment of writing and, in particular, variation in marking and differences in the marking keys. The project tested the hypothesis that, in the reporting of writing benchmark data, each year's test results caused variations in the length of the reporting scale. This resulted in considerable problems with equating and unexpected variations in the percentage of students reported as achieving the benchmark. The research does not necessarily represent the views of either MCEETYA Ministers or PMRT members.

The MCEETYA Performance Measurement and Reporting Taskforce (PMRT) prepared this manual to provide information to assist schools and school systems in collecting student background information, as required by Education Ministers. It is intended for use by schools and school systems when enrolling students for the first time in the 2007 school year, or when collecting information, via special data collection forms, on those students involved in national testing in 2007.

This report, prepared by the University of Queensland School of Education and KPA Consulting Australia, and commissioned for MCEETYA, discusses the practices, processes, strategies and structures that best promote “lifelong learning” and the development of “lifelong learners” in the middle years of schooling. The report presents the findings of a project undertaken to address the broad question of how to ensure the engagement with learning of all middle years students, and how to encourage in them a higher order of learning objectives and outcomes, both now and throughout life.
The National Code of Practice for Sponsorship and Promotion in School Education was developed by a working party of the Australian Education Council (AEC), established to examine school–industry links, in conjunction with the Industry Education Forum, the Business Council of Australia, parent and school council organisations and teacher unions. The joint working party was established at the AEC's 1991 meeting in Melbourne, and Ministers for Education considered the National Code at their 1992 meeting in Auckland, New Zealand. This national code of practice is intended to guide participants in sponsorships and promotions towards the most constructive practice in this field, maximising the important educational benefits that can be obtained and avoiding activities that are not consistent with good educational practice.

This report was commissioned for MCEETYA by the Queensland Department of Education and the Arts, and prepared by Stephen Lamb, Anne Walstab, Richard Teese and Margaret Vickers (of the Centre for Post-compulsory Education and Lifelong Learning at the University of Melbourne) and Russ Rumberger, of the University of California, Santa Barbara. The report aims to identify the main drivers of current trends in retention rates across States and Territories, and to develop a set of models to predict differences in patterns of retention. The research does not necessarily represent the views of MCEETYA Ministers, individual State/Territory or Australian Government Education Ministers, or departments responsible for education.

This report was commissioned for MCEETYA by the Queensland Department of Education and the Arts, and prepared by Stephen Lamb, Anne Walstab, Richard Teese, Margaret Vickers and Russ Rumberger of the Centre for Post-compulsory Education and Lifelong Learning at the University of Melbourne.
The report aims to identify the main drivers of current trends in retention rates across States and Territories, and to develop a set of models to predict differences in patterns of retention. This is a summary report of the research. The research does not necessarily represent the views of MCEETYA Ministers, individual State/Territory or Australian Government Education Ministers, or departments responsible for education.
EPSOM, England - June 11, 2015: The new Toyota Avensis aims to make things easy for the customer, with new style, new engines and innovative technology. Regardless of version or equipment grade, it is designed to be prestigious and trustworthy.

Designed and engineered in Europe and built exclusively in Britain, more than 1,711,800 Avensis across four model generations have appeared on European roads since the original was launched at the end of 1997. Quality, durability and reliability have always been among Avensis's strongest suits, reflected in the current model achieving consistently high QDR ratings and the highest score in last year's J.D. Power Vehicle Ownership Satisfaction Survey in Germany. Although Avensis has traditionally appealed to private customers, it is the fleet market that commands by far the largest proportion of sales, accounting for 75 per cent of D-segment business in Europe.

• Styling: a new, more prestigious and dynamic exterior design with LED lamp technology.
• Sensory quality and comfort: an all-new, more elegant and refined interior, with an emphasis on significantly improved sensory quality, comfort, trim, finishes and NVH.
• Safety: a comprehensive upgrade of safety systems, targeting a five-star Euro NCAP rating. A focus on active safety technology introduces the new Toyota Safety Sense package, provided as standard.
• Equipment and value for money: a new, more clearly differentiated grade structure with class-leading standard equipment levels and advanced technologies, such as the Toyota Touch 2 system and an eight-inch multimedia touchscreen.
• Driving pleasure and running costs: two new Euro 6 diesel engines with extended service intervals, lower CO2 emissions across the entire range, lower servicing costs and chassis enhancements to improve ride and handling.
Toyota anticipates that these improvements will not only promote greater loyalty within the model's existing customer base, but will also increase Avensis's appeal to new customers as a genuinely attractive proposition.

Jon Corpe from Toyota Manufacturing UK's Burnaston plant, speaking two weeks ahead of the start of production, talks about how the factory prepared for the simultaneous introduction of the new Avensis and new Auris.

The simultaneous start of production of two new models – Avensis and Auris – is a first for TMUK. We don't have a dedicated Avensis line; the car is built alongside Auris on the same production line, including the weld shop, paint shop and so on. Normally, as production of an outgoing model slows, the new model takes over on the line to compensate, so the number of cars leaving the plant remains fairly constant. In this case, however, production of both models comes to an end at the same time, then both new models start. This simultaneous start of production means we have to ramp up from zero to 800 vehicles a day over a period of just 10 days.

Achieving this presents two major challenges. The first is to set up the plant and the second is to meet the global demand for the new vehicles. Both models are going to market immediately, so we have to do this in a very short period of time. Even during production start-up we will be building up to 280 new Avensis a day.

The two new models mean we need to carry out a significant update across the entire plant. The press, weld and paint shops have new tooling and jigs to support body manufacture and painting, and the substantial number of improvements to the vehicle, in areas such as safety and handling, require additional parts and processes. But the greatest level of change is in the plastics and assembly shops. In plastics, most of the processes have been affected by the introduction of the new vehicles, calling for new moulds and a great deal of new tooling and equipment.
In the assembly shop, more than half the processes are new, and the introduction of Euro 6 emissions standards has meant an update for our entire engine range for both models, plus, of course, the addition of three brand-new engines. The new engines and the introduction of Toyota Safety Sense add a significant new dimension to the checks made in quality assurance. For instance, we have had to install new road markings and signage on our test track to confirm the real-world functionality of the new lane-keeping, sign recognition and pre-crash systems.

This means our workforce – TMUK has more than 2,000 people operating on rotating day and night shifts – has had to train to build two different models at once. Nothing remains the same. The Standardised Work, the foundation of our production method, has had to be rewritten. The specialist equipment has had to change and the number of parts we have to manage has doubled. It's a big challenge. Each member must be equally skilled in building every variant of both models. Because the cars are built to customer order, they don't come down the production line in batches; it's just one line with a huge variation in product.

Each process in the plant has what we call a Takt time. The Takt time we're currently working to means that a car will drive off the production line every 66 seconds, and everyone's process, wherever they are working in the plant, is designed around that 66-second timeframe. This means we have had to redesign every process so that its work content can be accomplished in 66 seconds, then, before the start of production, train every member so that they can complete their process on time, every time, without error. It's a huge task.

Obviously we have to manage lots of equipment as well, including any new tooling. And, with two vehicles changing, the number of parts we have to change doubles. A supplier's work can double too, if they are supplying parts for both models.
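The takt-time arithmetic described above can be sanity-checked with a short calculation. This is only an illustrative sketch: the 66-second takt and the 800-cars-a-day target come from the interview, while the plant's actual shift structure and available production time are not stated, so the script simply inverts the quoted figures.

```python
# Takt time: the pace at which vehicles must leave the line to meet demand.
# Figures from the text: 800 vehicles/day, one car every 66 seconds.
# Shift layout and break times are NOT given, so we only invert the numbers.

def takt_time(available_seconds: float, daily_demand: int) -> float:
    """Seconds of production time available per vehicle."""
    return available_seconds / daily_demand

daily_output = 800   # cars per day (from the text)
takt_seconds = 66    # seconds per car (from the text)

# How much line time does building 800 cars at a 66 s takt imply?
required_seconds = daily_output * takt_seconds
required_hours = required_seconds / 3600
print(f"{required_hours:.2f} hours of line time per day")  # ~14.67 h

# Check the definition the other way round.
assert takt_time(required_seconds, daily_output) == takt_seconds
```

The implied 14.7 hours of daily line time is consistent with the rotating day/night shift pattern mentioned in the text, though the exact shift lengths are an assumption left open here.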
TMUK has been involved in the new Avensis programme from the very early stages, influencing the design to ensure the quality of the finished product. We call that first phase "design and development". Working alongside design and R&D, we study new designs and even build a digital car using CAD to make sure the new vehicle can be built and quality assured. Then we are involved in the confirmation assembly build of the very first vehicles to make sure quality can be built in. We assemble a number of vehicles with the designers present and study the build process as we go. It's quite a long, intensive process, during which we're also writing and fine-tuning our standardised documentation on how to build the car.

During this phase we look at the vehicle from four different perspectives. We'll study each part – for Avensis alone the list numbers around 3,000 – make sure our tools and equipment are suitable, make sure the members can build it, and make sure the build method is correct. Because of our early involvement in the programme, we perfected many aspects of the assembly – the body build, the robot teaching and so on – early last year. Nonetheless, we still have a great deal to do. The plant will be empty for only a very short time – just a weekend – during which we'll complete a very quick changeover of all the equipment we weren't able to change during preparation, re-stock with new parts and begin the ramp-up process. Then we will have just 10 days to accelerate our production from precisely zero to 800 cars a day.

Both the new Avensis saloon and Touring Sports wagon have a more distinctive and dynamic appearance, generated by a new Energetic Elegance design theme. Overall vehicle length has been increased by 40mm and at the front there is a strong new look that marks a "second generation" evolution of Toyota's contemporary design language. The Toyota emblem is set more prominently within a smaller but sharper-styled upper grille.
The grille itself has a chrome trim bar which anchors new LED headlamp clusters. These incorporate LED daytime running lights, giving the new Avensis an individual illumination signature. The lower grille has been made significantly larger and spans a centre bumper section finished in gloss black, reinforcing Avensis's solid front stance. In conjunction with the downward sweep of the new grille, the fog lamp housings have been pushed out to the far edges of the bumper, making the vehicle appear broader. To the side, a new garnish along the sill creates a strong horizontal emphasis, giving the impression of a lower centre of gravity. The more elegant profile is supported by new alloy wheel designs. The rear of the vehicle has also been designed to add elegance and impact to the broad road stance. The rear light clusters use LED light-guide technology to generate a high-tech lighting signature.

European product planning's role is to ensure the smooth translation of market requirements into the work of our engineers and designers. By leading the Avensis project in Europe, with European development and design, we can make sure the expectations of European customers are met. This is especially true of fleet customers, which matters greatly to this project because fleet business represents 75 per cent of segment sales. Today, the decline in the D-segment is mostly due to falling private sales; fleet sales remain strong, so overall this remains an important part of the European market.

We research fleet company car drivers – the user-choosers – intensively. We have identified all their key requirements to ensure nothing is missing when they come to select their next company car. Those requirements are strong interior and exterior style, value for money, sensory quality and equipment. But before that, it is the fleet manager who decides whether a car is added to the vehicle list in the first place. And for them, total cost of ownership is important.
That means low CO2, low fuel consumption, high residual value, long service intervals and, of course, high levels of quality, durability and reliability. Our research also tells us that safety is a key concern for fleet managers. This means we must offer a car with a five-star Euro NCAP rating and the very latest in active safety equipment. This is why the new Toyota Safety Sense system is standard on the new Avensis.

Avensis is a key product in Toyota's fleet strategy, because it lets us provide fleet customers with a one-stop-shop solution. Fleet managers like to find a single supplier that can offer a comprehensive product range, including both passenger cars and commercial vehicles. Having an offer in the D-segment is fundamental to that approach. But there are other reasons for our investment in the new Avensis: we consider this car to be the Toyota brand flagship. It is not only an aspirational product for both existing and potential Toyota customers, it is also a key profit contributor to the Toyota network in Europe, particularly in regions such as Scandinavia. Avensis is also a strong contributor to the brand in areas such as prestige. It is a sign of the very strong customer loyalty it generates that, in households where there is an Avensis, there is a high probability that the second car in the family will also be a Toyota.

So, in broad terms, Avensis customers are looking for more luxury, more comfort and more high-tech equipment. But we have to distinguish between private and fleet customers. Private customers are looking for a certain type of styling, which we have identified through our research as including elements of prestige, elegance and luxury. They are also looking for value for money. Fleet customers, who are on average younger and much less brand-loyal, have different expectations when it comes to styling. They, too, want value for money, with a focus on high equipment levels.
So the design we produced for Avensis offers a good balance between sportiness and elegance, as required by the differing tastes of the private and fleet markets. The new Toyota family face is important to ensure consistency with our other vehicles, but we also wanted Avensis to stand out, immediately noticeable as our flagship.

Knowing that fleet customers often also have premium brands on their shopping list, we targeted new levels of sensory quality to create what we call a "one grade up" feeling. We achieved this through perfect consistency of colour, materials, character lines and backlighting. We use highly tactile materials for the door and instrument panels, adding a new level of richness to the touch. We have also introduced Alcantara to the seat upholstery as standard from mid-grade (Business Edition). In all, these measures help us achieve an interior which we feel offers one of the best perceived values in the class.

Equipment levels are also very important for European customers. We have best-in-class safety through the standard provision of Toyota Safety Sense, and in terms of HMI there is the eight-inch Toyota Touch 2 screen and the 4.2-inch colour TFT multi-information display – both standard from Business Edition grade. The final element is dynamic improvement, where we focused on what we know is important for fleet customers, who spend long hours at the wheel. We have concentrated on seat comfort – both overall comfort and holding performance – and on NVH improvements.

From a powertrain perspective, fleet business in the D-segment is almost exclusively diesel, and this is why it was so important to update our offer with new 1.6 and 2.0-litre units, which we know are positioned in the core of the segment and are very competitive in terms of the relationship between CO2 emissions and performance. Where the new 1.6 D-4D engine is concerned, CO2 is down by 11g/km, and for the 2.0 D-4D it is down by 24g/km, compared to the previous generation.
Those are significant improvements, not just fine-tuning. With the 1.6 diesel we are entering the small-engine sub-segment with Avensis, an area that is growing very rapidly due to CO2-based taxation schemes. Again, this is something very important for fleet. However, despite following this trend for downsizing, the 1.6 D-4D is not an entry-level eco version. Our engineers have been able to maintain the high driveability and comfort expected of a D-segment model. In some markets, private sales are more biased towards petrol. For Avensis we have kept our existing petrol engines, but significantly improved their fuel efficiency, and improved the CVT transmission in terms of fuel efficiency and driveability, particularly in city traffic.

One of Toyota's aims in creating the new Avensis was to guarantee comfort and convenience across the range. The elegant, refined interior makes an important contribution to this goal: sensory quality and NVH have been taken to a higher level, and there are new, premium-quality trim finishes. The instrument panel is divided into two sections. The sleek, full-width upper part contains an instrument binnacle with recessed tachometer and speedometer dials either side of a large, 4.2-inch TFT multi-information display (colour on higher grades, monochrome on Active grade). The lower section houses a centre console that is separated from the transmission tunnel and dominated by an eight-inch full-colour touchscreen. A redesigned steering wheel and gear lever complete the driver's cockpit. Switchgear feel and operation have been improved, and sensory quality has been further heightened by harmonising graphics and symbols and providing uniform blue back-lighting. Satin chrome highlighting on the instrument binnacle, steering wheel, console switchgear, air vents and gear lever presents a crisper, higher-quality appearance.
A new range of more appealing interior finishes is available, including combinations of fabric or leather with Alcantara seat upholstery (a first for the Avensis's class) and a new, Dual Ambient light grey colour scheme. The cabin is further improved with the introduction of a new front seat design, making for more comfortable long-distance travel – a priority for fleet/business customers. The size of the upper backrest has been increased and the backrest bolsters have been redesigned, giving both extra shoulder support and better lateral holding performance. The seat suspension mat has also been redesigned to improve pressure distribution and reduce long-haul fatigue. The cushion angle has been increased to give better thigh support and the cushion side bolsters have been reshaped.

NVH levels have been significantly reduced, ensuring that the increase in the new Avensis's interior quality is matched by a perceptible decrease in cabin noise. New and thicker materials provide additional sound absorption and insulation, and seal quantity, thickness and width have been increased throughout the body shell. The thickness and density of the bonnet insulator and the thickness and size of the engine under-cover insulator have been increased. Diesel models further benefit from the addition of an underbody damping sheet. A polyurethane foam over-moulding has been incorporated in the wing protector, and further detailed changes have been made to sound-absorbing elements in the upper instrument panel and dashboard. Air conditioning noise has been reduced by improving the heater's air duct seal. Touring Sports versions fitted with a Skyview panoramic roof gain a damping sheet in the roof lining.

Avensis adopts a new grade structure which reflects how it has been designed, engineered and equipped to meet the leading requirements of both private and business customers. The established Active and Excel grades mark the entry point and top of the range.
These are joined by new Business Edition and Business Edition Plus versions at the heart of the line-up. Across the board, specifications secure high levels of safety, comfort and convenience, not least with the standard provision of Toyota Safety Sense integrated active safety features on all models (further details in the Safety chapter below). Key features of the Active grade include Pre-Crash Safety system with Autonomous Emergency Braking, cruise control, air conditioning, six-speaker CD/radio audio, Bluetooth, auto-dimming rear-view mirror, LED rear and daytime running lights and power windows. Business Edition adds to this strong foundation with the Toyota Touch 2 with Go touchscreen multimedia and navigation system, digital/DAB audio package with eight-inch display, reversing camera, front fog lights, rain-sensing wipers, dusk-sensing headlights, automatic air conditioning, 17-inch alloy wheels and part-Alcantara seat upholstery. The active safety features extend to Automatic High Beam, Lane Departure Warning and Road Sign Assist. Business Edition Plus delivers further premium features including leather upholstery with Alcantara inserts, front fog lights with a cornering function, LED headlamps, smart entry and rear privacy glass. The LED daytime running lights gain light guides, creating a distinctive lighting signature. Customers can choose from 10 colours, with the metallic options including new Havana Brown and Orion Blue shades. Option packs are also available: the Protection Pack (£325) provides mud flaps, scuff plates, boot liner and rear bumper protector (plus additional load space rails for the Touring Sports if required); the Chrome Pack (£250), available for Avensis for the first time, adds chrome side sill and boot/tailgate trim; and the Parking Pack (£495) equips the car with front and rear parking sensors. 
Clear, instant driver information is displayed on a new 4.2-inch colour TFT screen, set between the principal meters in the instrument binnacle. The range of data includes audio, phone, navigation and safety functions, including active safety system status and warnings. The display's design allows multiple information sources to be presented simultaneously. The TFT display is in colour on all new Avensis models apart from Active grade, which uses a monochrome version. All Avensis models are covered by Toyota's five-year/100,000-mile new vehicle warranty.

There are revisions to the front and rear suspension, and improved steering feel and responsiveness. Toyota has comprehensively revised the powertrain line-up for the new Avensis, building on its reputation for reliability and durability and providing customers with the benefits of lower fuel consumption, emissions and ownership costs. At the same time, enhancements to the body structure, suspension and power steering deliver improvements in ride comfort and handling performance.

The UK range features two new diesels: a 1.6-litre D-4D and, making its first appearance in a Toyota, a 2.0-litre D-4D unit. The 1.6 D-4D generates CO2 emissions of just 108g/km, 11g/km less than the 2.0-litre unit it replaces. The new 2.0-litre engine's 119g/km represents a 24g/km reduction on the performance of the outgoing 2.2-litre unit. An increase in service intervals to 12,500 miles and a reduction of about 20 per cent in the 36,000-mile/three-year servicing costs for both units have helped make the diesels cheaper to run.

In tune with the current move to downsize powerplants to achieve better fuel economy, lower emissions and better driving dynamics, Toyota is replacing Avensis's current 2.0-litre D-4D diesel engine with a new 1.6 D-4D. This Euro 6-compliant engine, working with a six-speed manual transmission, is 20kg lighter than its predecessor. It develops 110bhp/82kW at 4,000rpm and 270Nm of torque from 1,750 to 2,250rpm.
This gives 0–62mph acceleration in 11.4 seconds and a top speed of 115mph. The engine posts an eight per cent improvement in fuel efficiency compared to the previous 2.0 D-4D, with combined-cycle fuel consumption of 67.3mpg; at the same time, CO2 emissions have been reduced from 119 to 108g/km. The engine has been tuned for fast throttle response throughout the rev range. It generates good initial response at low rpm, then, as turbo boost develops, a linear build-up of torque. The availability of torque has been extended, so the engine will rev freely beyond 3,000rpm without running out of breath.

The new Euro 6-compliant 2.0-litre D-4D shares the low fuel consumption and emissions performance of its 1.6-litre sister unit, but it has been tuned for a stronger focus on performance. It develops a maximum 141bhp/105kW at 4,000rpm and a generous 320Nm of torque from 1,750 to 2,500rpm. Its linear torque build-up and willingness to rev give it particularly strong in-gear responsiveness and acceleration: it will move the car from rest to 62mph in 9.5 seconds and reach a top speed of 124mph. Numerous developments, including a new timing chain design, ensure quiet running at all speeds, and Toyota's stop-and-start technology, coupled with a tall sixth gear for motorway cruising, helps the 2.0 D-4D return average fuel consumption of 62.8mpg with 119g/km CO2 emissions.

Both new diesel engines benefit from numerous advanced technologies that help minimise fuel consumption and emissions without detracting from engine performance and driving pleasure. A fuel injection control system uses Digital Diesel Electronics to control injection in line with engine speed, load and temperature, gaining more precise control of pressure, timing and volume than can be achieved with conventional common-rail technology. This allows better fuel efficiency and compliance with stricter emissions regulations to be achieved with no detriment to engine performance.
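As a rough cross-check of the CO2 figures quoted for the two new diesels, the relative reductions can be worked out from the stated numbers. This is a sketch only: the outgoing engines' values are not quoted directly, so they are reconstructed here as the new figure plus the stated reduction.

```python
# Percentage CO2 reductions implied by the quoted figures (g/km).
# "new" and "reduction" come from the text; "old" is reconstructed
# as new + reduction, since the outgoing figures are not stated directly.
engines = {
    "1.6 D-4D (replaces 2.0 D-4D)": {"new": 108, "reduction": 11},
    "2.0 D-4D (replaces 2.2 D-4D)": {"new": 119, "reduction": 24},
}

for name, e in engines.items():
    old = e["new"] + e["reduction"]
    pct = 100 * e["reduction"] / old
    print(f"{name}: {old} -> {e['new']} g/km ({pct:.1f}% lower)")
```

On these reconstructed baselines the 1.6 D-4D comes out roughly 9 per cent lower and the 2.0 D-4D roughly 17 per cent lower than the engines they replace, which supports the "significant improvements, not just fine-tuning" claim made below.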
A combination of swirl and tangential intake ports creates an ideal swirl pattern in the intake air/fuel mixture, promoting more complete combustion and, hence, greater engine efficiency. The camshaft has a built-up construction, comprising individual cams, gears and shaft. Each component is made from a carefully selected combination of materials. This design approach reduces overall weight by around 40 per cent, a saving that contributes to overall fuel efficiency. The Hydraulic Valve Clearance Compensation system features hydraulic pistons that continuously adjust individual intake and exhaust valve clearance according to engine speed and load. This optimises intake and exhaust airflow for better engine performance and fuel efficiency. A cross-flow cooling system channels engine coolant flow from the hotter exhaust side to the cooler intake side for more even heat distribution over the cylinder head. This reduces pressure losses and enhances fuel efficiency. Together with its sound-absorbing properties, the resin cylinder head cover allows – thanks to its ease of manufacture – for a more complex inner structure. This means the oil separator and pressure control valve have been built into the cover, to separate the oil from the blow-by gas. This reduces the amount of oil burned during re-combustion, reducing emission impurities. A new charging control system automatically regulates the amount of electricity generated by the alternator, which affects the amount of load on the engine, according to driving conditions. The system increases alternator load under deceleration and decreases it under acceleration, and can also balance fuel efficiency with the electricity needed when the engine is idling, or at cruising speed. Both engines are equipped with Toyota’s stop and start system and a high-performance diesel particulate filter, further reducing particulate and CO2 emissions. 
The four-cylinder, 1,798cc, 16-valve DOHC engine develops 145bhp/108kW at 6,400rpm and 180Nm of torque at 3,800rpm. Matched to a six-speed manual transmission, it will accelerate the Avensis from 0–62mph in 9.4 seconds (10.4 with CVT) and on to a 124mph top speed. Combined-cycle fuel consumption has improved to 47.1mpg and CO2 emissions have fallen by 14g/km to 139g/km – band E for road tax/Vehicle Excise Duty (figures for saloon with 16-inch wheels). When matched to the Multidrive S CVT, the benchmark figures mark a similar improvement, at 138g/km and 47.9mpg.

The operating angle of the Valvematic and VVT-i systems has been increased to optimise valve lift angle and timing across the driving range. As a result, power output has been increased and mechanical losses reduced, thus improving fuel efficiency. Continuous optimal throttle control in accordance with Valvematic and VVT-i operation further improves both fuel efficiency and driveability, and the addition of an oil temperature sensor enhances VVT-i performance for a further gain in fuel economy. The compression ratio of the 1.8 Valvematic engine has been increased to 10.7:1, enhancing thermal efficiency, and the fuel system benefits from changes to fuel injection and timing, reducing fuel loss to the exhaust side during injection.

Friction has been significantly reduced by fitting a tension-reducing ribbed V-belt auto-tensioner and a low-friction timing chain and chain damper; by the adoption of Teflon coatings for the front and rear oil seals and resin coatings on the sliding surfaces of the crankshaft, camshaft and thrust bearing; and by the reduction of both oil pump flow and vacuum pump drive torque. Engine warming performance has been improved by using a shell-type exhaust manifold and the optimisation of valve timing and fuel injection quantities. This accelerates the increase in exhaust gas temperature, warming the catalyst quickly to reduce emissions from cold starts.
The cooling system has been improved through the installation of a high-response thermostat and precise electric fan control. These measures improve anti-knock performance and, hence, fuel efficiency. Compatibility with high-sulphur fuels has been achieved through a nitride treatment on the positive crankcase ventilation valve and a height change to the piston rings. Finally, both units also benefit from the previously described charging control system.

A further four per cent improvement in fuel economy has been realised through extensive revisions to the Multidrive S continuously variable transmission that is optionally available with the 1.8 engine. These include a new torque converter, continuously variable unit, oil pump, reduction and differential gears, hydraulic control unit and CVT fluid warmer. The CVT control logic has also been adjusted to reduce engine revving at medium throttle settings, more closely matching engine speed to throttle inputs, like a conventional automatic transmission.

The new Avensis’s bodyshell has been made more rigid by the application of additional spot-welding points and the use of a high-strength urethane windscreen bonding material. The car retains the proven MacPherson strut front and double wishbone rear suspension design of its predecessor, but both elements benefit from significant changes to improve ride comfort and handling.

At the front there is a new strut bearing and support, with a change from resin to steel for the bearing material, reducing friction and so improving steering feel and feedback, together with a reduction in the spring rate and an increase in the spring side-load compensation. Damping force has been tuned and, in the case of diesel models, the springs’ shape has been changed and their rates adjusted in favour of comfort. Similar adjustments have been made at the rear, together with a new piston valve design that gives a perceptible improvement in ride comfort.
Steering feel and responsiveness have been improved by the use of a new intermediate shaft, a change in the diameter of the anti-roll bar and an increase in bodyshell rigidity through the use of high-strength urethane bonding for the windscreen. Changes have also been made to the electric power steering’s assistance characteristics: the neutral position is more accurate for better high-speed straight-line driving, and steering torque delivery has been fine-tuned to better match linearity with lateral acceleration and yaw response.

Hill-start Assist Control adds a further benefit. This applies brake pressure to all four wheels for a maximum of two seconds when the driver comes off the brake pedal to apply the throttle, preventing the vehicle from rolling backwards when pulling away on a steep or slippery incline.

From the European R&D perspective, this was a milestone project. Avensis is a Europe-unique vehicle, built in Europe. We had a great deal of previous involvement with the model and knew that we could develop it further and perfect it, particularly in the context of fulfilling the requirements of fleet customers.

Our management in Japan told us “you make the business case, you decide what you can do, and you will have to take on the majority of the workload from day one”. That meant Toyota Motor Europe was involved in a number of activities where we had no previous experience, so, to adapt to that new position, we had to develop our organisation in parallel with the project itself.

From a resource point of view, certain elements were not in place. For instance, this was the first time we had worked with an external company to help us out on the engineering side. This was an important step for us, showing that we can be flexible in developing projects even when we don’t have all the resources ready from day one.
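The Hill-start Assist behaviour described above (brake pressure held for up to two seconds after the pedal is released, until the driver takes over with the throttle) can be sketched as a small state machine. This is purely an illustrative model, not Toyota’s implementation; the class and method names, and the incline check, are assumptions.

```python
class HillStartAssist:
    """Illustrative model of Hill-start Assist Control: hold brake
    pressure for up to HOLD_S seconds after the driver releases the
    brake pedal on an incline, releasing early if the throttle is applied."""

    HOLD_S = 2.0  # maximum hold time quoted in the text (seconds)

    def __init__(self) -> None:
        self._released_at = None  # time the brake pedal was released

    def on_brake_release(self, on_incline: bool, now: float) -> None:
        # Start the hold timer only when the car is on a slope.
        self._released_at = now if on_incline else None

    def brakes_held(self, now: float, throttle_applied: bool) -> bool:
        """Return True while the system is still holding brake pressure."""
        if self._released_at is None:
            return False
        if throttle_applied:
            # Driver has taken over with the throttle: release immediately.
            self._released_at = None
            return False
        return (now - self._released_at) <= self.HOLD_S
```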
So people joined and then left as the project progressed, helping us with electronics, body design, engineering and other key development areas. I started planning the project at the start of 2012, spending the first six months simply deciding what we wanted to achieve and how to get there. By August we had a scenario and a business case ready for the changes we wanted to make.

If you look at the new model, you’ll see that a lot of the change content is of the type we usually only do for an all-new vehicle, such as a full interior and combi-meter development. These don’t fit in with our typical minor-change schedules, so it was quite a challenge to work within the short timescale we had set ourselves. This is the first time we’ve carried out such a major content change in such a short time. We’re talking plant investment of 36 million euros (about £26 million), 368,000 man-hours and more than 1,000 vehicle part changes.

A certain percentage of Avensis production is for what we call general export, and this vehicle is also imported into Japan. It very much appeals to Japanese customers as a “European-style” vehicle and they are prepared to pay a premium for certain specifications that are not available in their domestic market. The European-specification vehicle takes the lead in style, sensory quality, grade structure and safety; every other specification for other markets is a derivative of this.

At the start of the project, we agreed on a number of focus items. We had the luxury of the vehicle already being in the market, so we could talk to our network of national importers, dealers and customers, and make a shortlist of the items we should focus on. We were asked for improved sensory quality and comfort, more dynamic styling, a more flexible equipment line-up – especially in the context of fleet – and, of course, safety. That last is a big one, because in order to tackle the fleet market, we had to ensure Avensis has a five-star Euro NCAP rating.
This meant redeveloping the vehicle to meet the crash test programme’s 2015 requirements, which was quite a big challenge.

The European Sensory Quality Division was involved in the planning from day one. Previously we were just making sure we had the same surface finish, colour quality and so on. On this project we looked in far greater depth, for instance at shape and symbol consistency. During the early styling reviews, the SQ team was already giving feedback on shapes and materials to ensure we didn’t use too many, or have any mismatches.

The exterior styling was also a first for TME, because we changed the sheet metal and, in the cabin, we took responsibility for developing the new instrument panel. This is the first time we’ve tackled an interior at this level.

Where equipment is concerned, one of our key aims was better integration into the vehicle, as some customer feedback suggested it previously came across as lacking in overall co-ordination. So we have ensured the display/audio, combi-meter, heating/ventilation and other systems, such as the pre-crash system, all talk to each other and are fully integrated, not just a box-by-box installation. Previously, the engineers for each of these systems worked with different colours and symbols. Now we have designed a master to ensure all symbols, fonts and the new blue illumination match on each element, from steering wheel to centre console and so on.

Although everyone talks about the new diesel engines being supplied by BMW, I should point out that the combi-meter, the pre-crash system and quite a number of critical safety items were also sourced from European suppliers. Of course, the new 1.6 D-4D engine was first installed in Verso, which was when we had to address all the main installation challenges. With Avensis we could focus more on driveability and comfort, especially from the perspective of fleet customers. BMW worked closely with us on our driving events to help fine-tune these aspects of performance.
There were some heat management issues, but the bigger challenge was installing the 2.0-litre diesel unit in a Toyota for the first time. We were very focused on the engine’s driveability. We still wanted it to have the Toyota family feel, but also for there to be a clear differentiation between it and the 1.6, so people could clearly recognise the different merits of the two units.

We developed the Avensis with the target of achieving a five-star rating from Euro NCAP. Because of the big changes in the organisation’s test criteria, this presented a significant engineering challenge, especially where pedestrian protection was concerned.

For driving pleasure and comfort, we focused on NVH intrusion in the cabin and on seat comfort. We redesigned the seat to ensure that fleet customers who spend a long time behind the wheel don’t become uncomfortable or tired when driving long distances.

From the driving dynamics perspective, we have a new shock absorber supplier. That gave us new opportunities to tune damping in a different way, and we have specifically tuned the vehicle to best suit the European market. For instance, the rear end of a Toyota is traditionally tuned to be quite stable, because a feeling of stability is extremely important to Japanese customers. In Europe, customers are more concerned about a car’s agility and a sharp steering feel. So we have been able to shift the balance of the vehicle more towards the European dynamic style, while complementing this with a stiffening-up of the bodyshell.

As ever, though, there is a difference between the requirements of private and fleet customers. The fleet customer spends much more time on the road, so favours a more dynamic driving style and long-haul comfort. For the private buyer, the emphasis on handling is not so high: having a good-looking car and value for money are more important to them than the last word in driving dynamics.
That being said, it isn’t efficient to focus on just one area of vehicle development. For Europe, the overall balance of Avensis is more important than any single aspect. If the customer gets behind the wheel, they shouldn’t be distracted by one very good thing, or one very bad thing. They will spend a lot of time in the car and simply want to feel completely at ease.

Package on all models includes Pre-Collision System with Autonomous Emergency Braking.

In anticipation of securing a five-star rating in the Euro NCAP crash test programme, the new Avensis takes active safety and driver assistance to new levels with Toyota Safety Sense. On all models this provides a Pre-Collision System with Autonomous Emergency Braking. On all bar the entry-level Active grade the package further includes Lane Departure Alert, Automatic High Beam and Road Sign Assist, functions which process information gathered by a compact laser and camera unit mounted at the head of the windscreen.

The Pre-Collision System operates at speeds between approximately six and 49mph, detecting vehicles on the road ahead and reducing the risk of a rear-end collision. When it determines an impact risk, it triggers visual and audible alerts to prompt the driver to apply the brakes. At the same time, it primes the car’s braking system to deliver extra stopping force when the driver presses the brake pedal. If the driver fails to react in time, the system automatically applies the brakes, reducing speed by about 19mph, or potentially bringing the car to a stop, to prevent a collision or mitigate the force of impact.

The Lane Departure Alert system monitors lane markings on the road and helps prevent accidents and head-on collisions caused by a vehicle leaving its lane. If the vehicle starts to deviate from its lane without the turn indicators being used, the system alerts the driver with visual and audible warnings.

The Automatic High Beam helps ensure excellent forward visibility when driving at night.
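The Pre-Collision System’s operating envelope as described (active between roughly 6 and 49mph, escalating from a warning to autonomous braking that sheds about 19mph) can be sketched as a decision function. Only the speed range and speed reduction come from the text; the time-to-collision thresholds, names and overall structure are illustrative assumptions, not Toyota’s actual control logic.

```python
from enum import Enum

class PcsAction(Enum):
    NONE = 0
    WARN = 1        # visual and audible alerts, brake assist primed
    AUTO_BRAKE = 2  # autonomous emergency braking

# Operating envelope and speed reduction quoted in the text (mph).
PCS_MIN_MPH, PCS_MAX_MPH = 6.0, 49.0
PCS_SPEED_CUT_MPH = 19.0

def pcs_decide(speed_mph: float, time_to_collision_s: float,
               warn_ttc_s: float = 2.4, brake_ttc_s: float = 1.0) -> PcsAction:
    """Illustrative escalation logic; the TTC thresholds are assumptions."""
    if not (PCS_MIN_MPH <= speed_mph <= PCS_MAX_MPH):
        return PcsAction.NONE          # outside the system's speed envelope
    if time_to_collision_s <= brake_ttc_s:
        return PcsAction.AUTO_BRAKE    # driver failed to react in time
    if time_to_collision_s <= warn_ttc_s:
        return PcsAction.WARN          # prompt the driver to brake
    return PcsAction.NONE

def speed_after_auto_brake(speed_mph: float) -> float:
    # The system can shed roughly 19mph, potentially stopping the car.
    return max(0.0, speed_mph - PCS_SPEED_CUT_MPH)
```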
It detects both the headlights of oncoming vehicles and the tail lights of vehicles ahead, automatically switching between high and low beam to avoid dazzling other drivers. Because high beam can be used more frequently, pedestrians and obstacles are easier and quicker to spot.

Road Sign Assist helps ensure drivers are kept informed, even if they have driven past a road sign without noticing. It recognises signage such as speed limits and “no overtaking” warnings, and displays the information on the TFT multi-information screen in the instrument binnacle. If the driver exceeds the speed limit, the system activates a warning light and buzzer.
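The Road Sign Assist overspeed warning described above reduces to a simple comparison against the last recognised limit. A minimal sketch, with the function name and the no-sign-recognised behaviour as assumptions:

```python
from typing import Optional

def road_sign_assist_warning(current_mph: float,
                             recognised_limit_mph: Optional[float]) -> bool:
    """Illustrative: return True (warning light and buzzer) when the
    driver exceeds the last speed limit recognised by the camera.
    Returns False when no limit sign has been recognised yet."""
    if recognised_limit_mph is None:
        return False
    return current_mph > recognised_limit_mph
```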